The courtroom air was thick with the kind of tension usually reserved for high-stakes political coups. Greg Brockman, the architect of OpenAI’s operational engine, didn’t just return to the witness stand on Tuesday; he returned with a narrative designed to dismantle the myth of Elon Musk as the benevolent founder of the AI revolution. What we witnessed wasn’t just a deposition of facts, but a visceral autopsy of a friendship that collapsed under the weight of ego, equity, and an insatiable hunger for computing power.
This isn’t merely a squabble over who gets which slice of the pie. This trial is a referendum on the “nonprofit” promise. At its core, the lawsuit—in which Musk is chasing a staggering $100 billion in damages—asks whether OpenAI committed a fundamental betrayal of its founding charter or whether Musk simply realized too late that he couldn’t bully a boardroom into total submission.
The $50 Billion Appetite for Intelligence
The most jarring revelation of the day wasn’t a whispered secret, but a number: $50 billion. Brockman testified that OpenAI’s projected compute costs for 2026 have ballooned to that astronomical figure, a dizzying leap from the $30 million the company spent in 2017. To put that in perspective: that is roughly a 1,600-fold increase in under a decade. It is not just a budget increase; it is a total transformation of the economic scale of intelligence.
This spending spree reflects the brutal reality of the “compute war.” To train the next generation of frontier models, OpenAI isn’t just buying servers; they are essentially building a new category of industrial infrastructure. This aligns with broader industry whispers regarding Project Stargate, the rumored $100 billion supercomputer collaboration between Microsoft and OpenAI. The “nonprofit” dream died the moment the hardware requirements shifted from a few racks of GPUs to the energy consumption of a small city.
Brockman’s testimony painted a picture of early desperation, describing a hunt for funding that involved scouring the Forbes 500 for anyone with a pulse and a passion for AI. It reveals a critical truth about the current AI era: the barrier to entry is no longer just brilliant code, but the sheer ability to finance the silicon. In this environment, the “nonprofit” label became a luxury that the physics of LLMs simply couldn’t afford.
Whiskey, Confetti, and the 51% Ultimatum
Then there was the human drama—the kind of vivid, cinematic detail that makes a jury lean in. Brockman recounted a celebration at Musk’s “haunted mansion” in San Francisco following a 2017 victory where an OpenAI bot crushed a pro Dota 2 player. The scene was chaotic: confetti-strewn floors, remnants of “party carnage,” and Amber Heard serving whiskey.
But the celebration was a Trojan horse. Amidst the revelry, the conversation shifted to the company’s structure. Brockman testified that Musk didn’t just want a seat at the table; he wanted the table. Musk demanded a controlling 51% equity stake and the CEO title, justifying the grab by claiming he had “zero failures” in his history of starting multi-billion-dollar companies. The arrogance was palpable, even in Brockman’s recollection. Musk reportedly told the founders, “I can start another AI company tomorrow, like in one tweet.”
This moment marks the precise inflection point where the partnership fractured. When Brockman, Sam Altman, and Ilya Sutskever pushed back, suggesting that additional shares be bought at market price, the atmosphere shifted instantly. Brockman described a chilling transition in Musk’s demeanor—a sudden, angry silence followed by a flat “I decline.” The testimony suggests a volatile aftermath, with Brockman recalling a moment where he feared Musk might physically attack him as he stormed around the table.
The ‘Wolves’ of Mountain View and the Safety Myth
Perhaps the most intellectually gripping part of the testimony was the breakdown of Musk’s philosophy on AI safety. For years, Musk has positioned himself as the world’s primary alarmist regarding AGI risks. However, Brockman revealed a far more pragmatic, and perhaps cynical, internal monologue.
During a 2018 meeting, Musk allegedly told employees that he was resigning because OpenAI needed billions of dollars that only Tesla could provide. More tellingly, Brockman testified that Musk admitted he “would not work on safety” if he ran the project through Tesla. His goal was singular: catch up to Google’s DeepMind. In Musk’s own words, as recounted by Brockman, if the “sheep” (the safety-conscious) were dictating the pace while the “wolves” (the competitors) were not, the effort was pointless.
This revelation exposes a profound paradox in Musk’s public persona. While preaching caution to the masses, he was privately urging a “move fast and break things” approach to outpace the “wolves” at Google. It suggests that for Musk, safety was not a prerequisite, but a luxury to be addressed only after dominance was achieved.
A Legal Labyrinth: The Non-Profit Mirage
From a legal standpoint, this case hinges on the concept of “unjust enrichment.” Musk argues that he was tricked into funding a nonprofit that was always intended to be a profit machine for Altman and Brockman. However, the structural evolution of OpenAI—from a 501(c)(3) to a “capped-profit” entity—was a legal maneuver designed to attract capital while theoretically limiting the upside for investors.

“The transition from a pure nonprofit to a capped-profit model is a strategic hedge. It allows a company to access the venture capital necessary for massive compute costs while maintaining a legal tether to a charitable mission. The court’s challenge is deciding if that tether is a genuine leash or just a piece of decorative string.”
This sentiment, echoed by many corporate governance experts, highlights the fragility of OpenAI’s current structure. If the court finds that the founders intentionally deceived Musk about the company’s trajectory, the financial repercussions could be existential. But if the “capped-profit” model is seen as a necessary evolution for survival in the face of $50 billion compute bills, Musk’s claims of betrayal may look like the grievances of a spurned partner who simply lost a power struggle.
As the trial progresses, we are seeing the blueprints of the AI industry’s power dynamics. The “showdown” isn’t just about who owns the equity; it’s about whether the pursuit of AGI can ever truly be decoupled from the pursuit of profit. When “thinking” costs $50 billion a year, the “nonprofit” ideal isn’t just challenged—it’s practically impossible.
The Big Question: Do you believe a company capable of creating AGI can ever truly remain a nonprofit, or is the sheer cost of the technology an inevitable slide toward corporate control? Let’s discuss in the comments.