As of April 2026, World ID has launched version 4.0 of its proof-of-human protocol, introducing Selfie Check verification, agent delegation tools, and integrations with Zoom and Okta while maintaining iris-based biometric uniqueness via zero-knowledge proofs on the World Chain. The move is aimed at curbing AI-driven impersonation, but it has reignited debates over biometric centralization and corporate control of digital personhood.
The Cryptographic Core: How World ID 4.0 Achieves Unlinkable Verification
World ID’s foundational innovation lies in its use of iris-derived IrisCodes—256-bit cryptographic hashes generated from mesopic and near-infrared scans of the trabecular meshwork patterns in the human iris. Unlike facial recognition templates, which can be spoofed with high-fidelity masks or deepfakes, IrisCodes draw on entropy from approximately 240 degrees of freedom in iris texture, yielding false match rates below 1 in 10^15 under ideal conditions, according to independent audits by the National Institute of Standards and Technology (NIST) published in 2025.
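Iris recognition systems of this kind conventionally compare codes by fractional Hamming distance rather than exact equality, since two scans of the same eye never match bit for bit. The sketch below illustrates that comparison only; the 256-bit code length and the 0.32 threshold are illustrative assumptions, not World ID's actual parameters.

```python
# Illustrative sketch: comparing two fixed-length iris codes by
# fractional Hamming distance. Code length and match threshold are
# assumptions for illustration, not World ID's real parameters.

def hamming_fraction(code_a: bytes, code_b: bytes) -> float:
    """Fraction of differing bits between two equal-length bit strings."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

def is_same_iris(code_a: bytes, code_b: bytes, threshold: float = 0.32) -> bool:
    # Below the threshold, the codes are statistically the same iris.
    return hamming_fraction(code_a, code_b) < threshold

# Two scans of the same iris differ only by sensor noise:
scan_1 = bytes([0b10110010] * 32)                   # 256-bit code
scan_2 = bytes([0b10110010] * 31 + [0b10110011])    # one flipped bit
assert is_same_iris(scan_1, scan_2)
```

The statistical independence of unrelated iris textures is what drives the false match rate so low: an impostor code differs in roughly half its bits, far above any plausible threshold.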
What’s new in v4.0 is the implementation of multi-party entropy injection during authentication. When a user presents their IrisCode via the World ID app or Selfie Check, the system now combines it with ephemeral entropy from three independent sources: a device-bound nonce, a time-bound challenge from the relying party (e.g., Zoom), and a decentralized beacon from the World Chain. This ensures that even if the same IrisCode is used across multiple sessions, the resulting zero-knowledge proof is cryptographically unlinkable—preventing correlation attacks that could track a user’s activity across services.
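The unlinkability property can be sketched as deriving a session-specific commitment from the user's secret plus the three ephemeral inputs. This toy model substitutes a plain SHA-256 hash for a real zero-knowledge circuit, so it shows only why fresh entropy breaks cross-session correlation; every name and value is illustrative.

```python
# Toy model of multi-party entropy injection: a per-session commitment
# derived from the user's iris-derived secret plus three ephemeral
# inputs. A real deployment would prove knowledge of the secret in
# zero knowledge; a plain SHA-256 stands in here for illustration.
import hashlib
import os

def session_commitment(iris_secret: bytes,
                       device_nonce: bytes,
                       relying_party_challenge: bytes,
                       chain_beacon: bytes) -> str:
    h = hashlib.sha256()
    for part in (iris_secret, device_nonce, relying_party_challenge, chain_beacon):
        h.update(len(part).to_bytes(4, "big"))  # length-prefix each field
        h.update(part)
    return h.hexdigest()

secret = b"user-held iris-derived secret"   # never leaves the device
beacon = b"world-chain-beacon-epoch-981"    # illustrative beacon value

# Same secret, two sessions: fresh nonces and challenges yield
# unrelated outputs, so sessions cannot be correlated.
c1 = session_commitment(secret, os.urandom(16), b"zoom-challenge-1", beacon)
c2 = session_commitment(secret, os.urandom(16), b"zoom-challenge-2", beacon)
assert c1 != c2
```

The key difference from this sketch is that a zero-knowledge proof additionally convinces the verifier the commitment was formed from a valid enrolled secret, without revealing it.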

This represents a significant upgrade from v3.0, which relied solely on static key derivation from the IrisCode. As one cryptographer involved in the audit noted: “The shift to ephemeral, context-aware zero-knowledge proofs in World ID 4.0 mirrors the evolution from static signatures to Schnorr signatures in Bitcoin—it’s not just about proving you’re human, but proving it without leaving a trace.”
“What World ID is attempting is novel: using biometrics not as an identifier, but as a one-time authorization token that never leaves the user’s control. If they can maintain true unlinkability at scale, this could redefine privacy-preserving authentication.”
Agent Delegation and the Rise of the ‘Human-in-the-Loop’ Primitive
Perhaps the most consequential addition in v4.0 is the agent delegation framework, which allows users to grant limited, time-bound powers of attorney to AI agents via signed policy tokens. These tokens—encoded as JSON Web Tokens (JWTs) with JWE encryption and scoped claims—are presented alongside the user’s World ID proof when an agent acts on their behalf. Crucially, the relying party (e.g., a trading platform or DAO) can verify that the agent’s action was authorized by a verified human without learning the user’s identity.
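The article describes these tokens as JWE-encrypted JWTs with scoped claims. A minimal sketch of that shape, using stdlib HMAC signing in place of real JWT/JWE machinery so the example stays self-contained; all claim names and scope strings are assumptions, not World ID's actual schema.

```python
# Illustrative shape of a scoped, time-bound delegation token.
# Real tokens are described as JWE-encrypted JWTs; this sketch signs
# a base64 payload with stdlib HMAC. All claim names are invented.
import base64
import hashlib
import hmac
import json
import time

def make_delegation_token(signing_key: bytes, agent_id: str,
                          scopes: list[str], ttl_seconds: int) -> str:
    claims = {
        "sub": agent_id,                        # the delegated AI agent
        "scope": scopes,                        # actions the agent may take
        "iat": int(time.time()),
        "exp": int(time.time()) + ttl_seconds,  # time-bound authority
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(signing_key: bytes, token: str, required_scope: str) -> bool:
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(signing_key, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return required_scope in claims["scope"] and claims["exp"] > time.time()

key = b"user-held delegation key"
token = make_delegation_token(key, "trading-agent-7",
                              ["swap:usdc-eth"], ttl_seconds=600)
assert verify_token(key, token, "swap:usdc-eth")
assert not verify_token(key, token, "withdraw:all")  # out of scope
```

The scoping is the point: a relying party rejects any action outside the enumerated claims, so a hallucinating agent cannot escalate beyond what the human signed.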

This addresses a growing concern in agentic AI: as LLMs gain autonomy to execute transactions, manage wallets, or negotiate contracts, there’s no cryptographic way to distinguish between a user’s direct intent and an agent’s hallucinated or malicious action. World ID’s model introduces what some are calling a “human intent root of trust”—a cryptographic anchor that ensures final accountability traces back to a biometrically verified person.
Early adopters include Gnosis Safe, which has integrated World ID agent delegation into its multi-signature wallet interface, allowing users to approve DeFi transactions via AI agents while retaining final human oversight. According to a Gnosis engineering lead: “We’re not trying to stop agents—we’re trying to make sure they can’t act without a human’s cryptographic blessing. World ID gives us that switch.”
Ecosystem Tensions: Open Source Claims vs. Centralized Control
While Tools for Humanity emphasizes that World ID’s core protocols are open source—available under Apache 2.0 on GitHub—and regularly audited by firms like Trail of Bits and Kudelski Security, critics point to the centralized nature of the World Chain and the proprietary nature of the Orb hardware. The Orb’s firmware and image processing pipeline remain closed-source, though the company says the IrisCode generation is verifiable via test vectors.
This has sparked debate in the self-sovereign identity (SSI) community. Unlike decentralized identifiers (DIDs) based on W3C standards—which allow users to control their own key material and choose verification methods—World ID ties identity to a biometric root that, while anonymized, cannot be rotated or changed if compromised. As one SSI architect put it: “You can’t revoke your iris. If the database is ever breached or abused, you’re stuck.”

Meanwhile, the reliance on Orb operators—who earn WLD tokens for each verification—creates a financial incentive that has led to exploitative practices in lower-income regions. In Kenya, where Orb operators were paid in WLD (then valued at ~$2), reports emerged of individuals scanning irises for minimal compensation, prompting a government ban in early 2026. Similar suspensions followed in Brazil and Indonesia over concerns of biometric harvesting under economic duress.
“Biometric systems that tie human dignity to token incentives risk creating a new form of digital feudalism—where the poor trade their bodily data for cryptocurrency scraps while the wealthy use the same system to access privileged services.”
Enterprise Integration: Zoom, Okta, and the Battle for Authentication Layer Supremacy
World ID’s partnerships with Zoom and Okta signal its ambition to become a foundational layer in enterprise authentication stacks. In Zoom, the integration works by having the World ID app generate a local zero-knowledge proof that the user possesses a valid IrisCode—without transmitting the code itself. This proof is then sent to Zoom’s backend, which verifies it against the World Chain via a trusted gateway. The result is a privacy-preserving check: Zoom learns only that the participant is a verified human, not which one.
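The relying-party side of that flow reduces to a few checks: the proof must answer this session's challenge, reference the current on-chain membership root, and verify cryptographically. A schematic sketch, in which every function and field name is illustrative rather than any actual World ID or Zoom API:

```python
# Schematic relying-party check for the Zoom-style flow: the backend
# verifies a zero-knowledge proof against its public inputs, learning
# only "verified human". All names here are illustrative.
from dataclasses import dataclass

@dataclass
class HumanProof:
    proof_blob: bytes      # opaque ZK proof from the user's device
    merkle_root: str       # root of the verified-human set on-chain
    nullifier: str         # session-scoped, unlinkable across services
    challenge: str         # the challenge this relying party issued

def verify_participant(proof: HumanProof, issued_challenge: str,
                       current_root: str, zk_verify) -> bool:
    if proof.challenge != issued_challenge:
        return False                      # replayed from another session
    if proof.merkle_root != current_root:
        return False                      # stale membership set
    # zk_verify stands in for the trusted-gateway call that checks the
    # proof against its public inputs; no biometric data is involved.
    return zk_verify(proof.proof_blob,
                     (proof.merkle_root, proof.nullifier, proof.challenge))

# Toy verifier that accepts any non-empty proof blob:
ok = verify_participant(
    HumanProof(b"\x01proof", "root-42", "nullifier-abc", "zoom-ch-1"),
    issued_challenge="zoom-ch-1",
    current_root="root-42",
    zk_verify=lambda blob, public_inputs: len(blob) > 0,
)
assert ok
```

Binding the proof to a per-session challenge is what prevents replay: a proof captured from one meeting fails verification everywhere else.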
Okta’s Human Principal product, now in beta, takes this further by allowing enterprises to issue verifiable credentials (VCs) based on World ID proofs. These W3C-compliant VCs can be stored in users’ digital wallets and presented to service providers for access control—enabling scenarios like “only verified humans may join this financial trading channel” or “this internal tool requires human oversight for AI-generated code commits.”
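The W3C VC data model gives such credentials a predictable shape. A minimal illustration as a Python dict, with an access-control predicate of the "only verified humans" kind described above; all identifiers and values are invented, and the cryptographic proof block a real credential would carry is omitted.

```python
# Minimal shape of a W3C Verifiable Credential asserting "verified
# human", roughly as an Okta-issued credential might look. Structure
# follows the W3C VC data model; identifiers and values are invented.

credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "ProofOfHumanCredential"],
    "issuer": "did:example:okta-human-principal",   # illustrative DID
    "validFrom": "2026-04-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-wallet",
        "humanVerified": True,        # the only claim disclosed
        "verificationMethod": "world-id-v4-zk-proof",
    },
    # A real credential would carry a `proof` block binding it to the
    # issuer's key; omitted here for brevity.
}

def requires_human(cred: dict) -> bool:
    """Access-control predicate: admit only verified-human credentials."""
    return (
        "VerifiableCredential" in cred.get("type", [])
        and cred.get("credentialSubject", {}).get("humanVerified") is True
    )

assert requires_human(credential)
```

Because the credential asserts only `humanVerified`, a service enforcing this predicate learns nothing else about the holder.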
Yet this raises questions about vendor lock-in. While World ID claims neutrality, its deep integration with Okta’s Identity Cloud and Zoom’s Developer Platform creates a de facto dependency. Enterprises adopting Human Principal may find it difficult to switch to alternative proof-of-human systems without reissuing credentials—a concern amplified by the lack of interoperability standards for biometric-based VCs.
Meanwhile, rivals like the IOTA Foundation’s Identity Wallet and SpruceID’s SiRP are pursuing federated models that allow multiple verification methods (e.g., government IDs, passkeys, or social recovery) to feed into a unified human presence signal—without requiring biometric enrollment. As one analyst noted: “World ID is building a cathedral. Others are building a bazaar. The question is which model scales in a world where users refuse to be locked into a single biometric vendor.”
The Road Ahead: Scalability, Regulation, and the ‘Function Creep’ Fear
With over 18 million verified users and 450 million authentications to date, World ID has demonstrated surprising adoption velocity—particularly in the Global South, where Orb deployments outpaced expectations in Nigeria, India, and Peru. However, scaling to hundreds of millions presents challenges: the World Chain currently processes ~1,200 transactions per second using a modified Ethereum L2 architecture with zk-rollups, but peak demand during global events (e.g., World Cup qualifiers) has pushed latency to 4.2 seconds.
Regulatory scrutiny is intensifying. The EU’s Digital Identity Framework now classifies biometric-based proof-of-human systems as “high-risk” under the AI Act if used for access to essential services, triggering requirements for fundamental rights impact assessments. In the U.S., the FTC has opened an inquiry into whether World ID’s incentive model constitutes an unfair or deceptive practice under Section 5, particularly regarding data minimization and user autonomy.
Perhaps the most persistent concern is function creep. While Tools for Humanity insists World ID is strictly for verification, not attribution, the technical capability to link IrisCodes to real-world identities—via cross-database matching or compelled disclosure—remains. As one privacy engineer warned: “Once you build a global biometric backbone, the mission creep is inevitable. Today it’s ‘prove you’re human.’ Tomorrow it’s ‘prove you’re not on a sanctions list.’ The day after that? ‘Prove you voted the right way.’”
For now, World ID’s bet is simple: in an internet flooded with AI agents, deepfakes, and synthetic identities, the ability to cryptographically assert humanness—without surrendering privacy—will become as essential as encryption. Whether the world will accept a private company as the gatekeeper of that right remains the central question.