
OpenAI is transitioning to a for-profit business. The stakes are enormous.

When it was founded in 2015, artificial intelligence research lab OpenAI was a nonprofit organization. The idealistic mission: to make sure the high-stakes work it was doing on artificial intelligence served the whole world. This was necessary because — according to the founders’ fervent belief, at least — that work would transform the whole world.

In some ways, OpenAI has since succeeded beyond its wildest dreams. “Artificial general intelligence” sounded like a pipe dream in 2015, but today we have talking, interactive, creative AI that can pass most tests of human competence we’ve put it to. Many serious people believe that full general intelligence is just around the corner. OpenAI, which in the years since its founding morphed from a nonprofit lab into one of the most highly valued startups in history, has been at the center of that transformation. (Disclosure: Vox Media is one of several publishers that has signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

In other ways, of course, things have been a bit of a mess. Even as it effectively became a business, OpenAI used nonprofit governance to keep the company focused on its mission. OpenAI CEO Sam Altman reassured Congress he had no equity in the company, and the nonprofit board retained full authority to change course if it thought the company had strayed from its mission.

But that put the board at odds with Altman last November in a messy conflict that the CEO ultimately won. Nearly the entire original leadership team departed. In the year since, the board has largely been replaced and high-profile employees have left the company in waves, some of them warning they no longer believe OpenAI will build superintelligence responsibly. Microsoft, OpenAI’s largest investor, increasingly seems eager for the company to stop building superintelligence and start building a profitable product.

Now, OpenAI is attempting a transition to a more conventional corporate structure, reportedly one where it will be a for-profit public benefit corporation like its rival Anthropic. But nonprofit to for-profit conversions are rare, and misinformation has swirled about what, exactly, “OpenAI becoming a for-profit company” even means.

Elon Musk, who co-founded OpenAI but left after a leadership dispute, paints the for-profit transition as a naked power grab, arguing in a recent lawsuit that Altman and his associates “systematically drained the non-profit of its valuable technology and personnel” in a scheme to get rich off a company that had been founded as a charity. (OpenAI has moved to dismiss Musk’s lawsuit, describing it as part of an “increasingly blusterous campaign to harass OpenAI for his own competitive advantage.”)

While Musk — who has his own reasons to be competitive with OpenAI — is among the more vocal critics, many people seem to be under the impression that the company could just slap on a new “for-profit” label and call it a day.

Can you really do that? Start a charity, with all the advantages of nonprofit status, and then declare one day it’s a for-profit company? No, you can’t, and it’s important to understand that OpenAI isn’t doing that.

Rather, nonprofit lawyers told me what’s almost certainly going on is a complicated and fraught negotiation: the sale of all of the OpenAI nonprofit’s valuable assets to the new for-profit entity, in exchange for the nonprofit continuing to exist and becoming a major investor in the new for-profit entity.

The key questions are how much those assets are worth, and whether the battered and bruised nonprofit board can get a fair deal out of OpenAI (and Microsoft).

So far, this high-stakes wrangling has taken place almost entirely behind the scenes, and many of the crucial questions have gotten barely any public coverage at all. “I’ve been really kind of baffled at the lack of curiosity about where the value goes that this nonprofit has,” nonprofit law expert Timothy Ogden told me.

Nonprofit law might seem abstruse, which is why most coverage of OpenAI’s transition hasn’t dug into any of the messy details. But those messy details involve tens of billions of dollars, all of which appear to be up for negotiation. The results will dramatically affect how much sway Microsoft has with OpenAI going forward and how much of the company’s value is still tied to its founding mission.

This might seem like something that only matters for OpenAI shareholders, but the company is one of the few that may just have a chance of creating world-changing artificial intelligence. If the public wants OpenAI to pursue this transition in a transparent and accountable way, it has to understand what the law actually allows and who is responsible for enforcing it.

How OpenAI went from nonprofit to megacorp

In 2015, OpenAI was a nonprofit research organization. It told the IRS in a filing for nonprofit status that its mission was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”


By 2019, that idealistic nonprofit model was running into some trouble. OpenAI had attracted an incredible staff and published some very impressive research. But it was becoming clear that the lofty goal the organization had set itself — building artificial general intelligence, machines that can do everything humans can do — was going to be very expensive. It was naturally hard to raise billions of dollars for an effort that was meant to be nonprofit.

“We realized that we’d reached the limits of our fundraising ability as a pure nonprofit,” co-founder Ilya Sutskever (who has since departed the company) told me at the time.

The company would attempt to split the difference with a hybrid structure: a nonprofit board controlling a for-profit company. An additional twist: Returns for investors in the for-profit company were capped at 100x their original investments so that, if world-altering superintelligence were achieved as the OpenAI leadership believed it might be, the benefits would accrue to all humanity and not just to investors. After all, investors needed to be enticed to invest, but if the company truly ended material scarcity and built a God on Earth, as its leaders essentially said they wanted to, the hope was that more than just the investors would come out ahead.

The nonprofit, therefore, was still supposed to be preeminent. “It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation,” an enormous black-and-pink disclaimer box on OpenAI’s website alerts would-be investors, “with the understanding that it may be difficult to know what role money will play in a post-AGI world. The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.”

You might expect that a prominent disclaimer like that would give commercial investors pause. You would be mistaken. OpenAI had Altman, a fantastic fundraiser, at the helm; its flagship product, ChatGPT, was the fastest app ever to reach 100 million users. The company was a gamble, but it was the kind of gamble investors couldn’t wait to get in on.

But that was then, and this is now. In 2023, in an unexpected and disastrously under-explained move, the nonprofit board fired OpenAI CEO Sam Altman. The board had that authority, of course — it was preeminent — but the execution was shockingly clumsy. The timing of the firing looked likely to disrupt an opportunity for employees to sell millions of dollars of stock in the company. The board gave a few examples of underhanded, bizarre, and dishonest behavior by Altman, saying among other things that he had been “not consistently candid” with the board. (One board member later expanded on the allegations, saying that Altman had lied to board members about private conversations with other board members, but offered nothing as concrete as confused and frustrated employees had hoped for.)

Employees threatened to resign en masse. Microsoft offered to hire them all and reconstitute the company. Sutskever, who was among the board members who’d voted for Altman’s removal, changed his mind and backed Altman’s return, leaving the directors who had fired him suddenly in the minority. Two of the board members who had opposed Altman resigned, and the once and future CEO returned to the helm.

Many people concluded that it had been a serious mistake to try to run a company worth 11 figures as a nonprofit instead of as the decidedly for-profit company it was clearly operating as, whatever its bylaws might say. So it’s not surprising that ever since the aborted Altman coup, rumors have swirled that OpenAI meant to transition to a fully for-profit entity.

In the last few weeks, those rumors have gotten much more concrete. OpenAI’s latest funding round reportedly includes a commitment that the nonprofit-to-for-profit transition will be completed within the next two years; if it isn’t, the more than $6 billion raised must be returned to investors. Microsoft and OpenAI — both of whom have enormous amounts to gain in the wrangling over who owns the resulting for-profit company — have hired dueling investment banks to negotiate the details.

We are moving into a new era for OpenAI, and it remains to be seen what that will mean for the humble nonprofit that has ended up owning tens of billions of dollars of the company’s assets.

How do you turn a charity into a for-profit?

If OpenAI were really just taking the nonprofit organization’s assets and declaring them “converted” into a for-profit — as if they were playing a game of tag and suddenly decided a tree was “base” — that would absolutely be illegal. The takeaway, though, shouldn’t be that a crime is happening in plain sight, but that something much more complicated is being negotiated. Nonprofit law experts I talked to said that the situation was being widely and comprehensively misunderstood.

Here are the rules. First off, assets accumulated by a nonprofit cannot be used for private benefit. “It’s the job of the board first, and then the regulators and the court, to ensure that the promise that was made to the public to pursue the charitable interest is kept,” UCLA law professor Jill Horwitz told Reuters.

If it looks as though a nonprofit isn’t pursuing its charitable interest, and especially if it appears to be handing some of its board members bargain-bin deals on billion-dollar assets during a transition to for-profit status? That will have the IRS investigating, along with the state’s Attorney General.

But a nonprofit can sell anything it owns. If a nonprofit owns a piece of land, for example, and it wants to sell that land so that it has more money to spend on its mission, it’s all good. If the nonprofit sold the land for well below market value to the director’s nephew, it would be a clear crime, and the IRS or the state’s Attorney General might well investigate. The nonprofit has to sell the land at a fair market price, take the money, and keep using the money for its nonprofit work.

At a much larger scale, that is exactly what is at stake in the OpenAI transition. The nonprofit owns some assets: control over the for-profit company, a lot of AI IP from OpenAI’s proprietary research, and all future returns from the for-profit company once they exceed the 100x cap built into the capped-profit structure — which, should the company achieve its goals, could well be limitless. If the new OpenAI wants to extract all of its assets from the nonprofit, it has to pay the full market price. And the nonprofit has to continue to exist and to use the money it has earned in that transfer for its mission of ensuring that AI benefits all of humanity.

There have been a few other cases in corporate legal history of a nonprofit making the transition to a for-profit company, most prominently the credit card company Mastercard, which was founded as a nonprofit collaboration among banks. When that happens, the nonprofit’s assets still belong to the nonprofit.

Mastercard, in the course of transitioning to a public company, ended up founding the now-$47 billion Mastercard Foundation, one of the world’s wealthiest private foundations. Far from the for-profit walking away with all the nonprofit’s assets, the for-profit emerges as an independent company and the nonprofit emerges not only still extant but very rich.

OpenAI’s board has indicated that this is exactly what it is doing. “Any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission,” OpenAI board chairman Bret Taylor, a technologist and CEO, told me in a statement. (What counts as “full value”? We’ll come back to that.)

Outside actors, too, expect to apply oversight to make sure that the nonprofit gets a fair deal. A spokesperson for the California Attorney General’s office told The Information that the office is “committed to protecting charitable assets for their intended purpose.” OpenAI is registered in Delaware, but the company operates primarily in California, and California’s AG is much less deferential to business than Delaware’s.

So the OpenAI for-profit entity will definitely owe the nonprofit mind-boggling amounts of money. Depending on who you ask, it could be anywhere between $37 billion and $80 billion. The for-profit does not have that kind of money on hand — don’t forget that OpenAI is projected to lose tens of billions of dollars in the years ahead — so the plans in the works are reportedly for the nonprofit to become a major shareholder in the for-profit instead.

The Information reported last week that “the nonprofit is expected to own at least a 25% stake in the for-profit — which on paper would be worth at least $37 billion.” In other words, rather than buying the assets from the nonprofit with cash, OpenAI will trade equity.

That’s a lot of money. But many experts I spoke to thought it was actually much too low.

What’s a fair price for control of a mega company?

Everyone agrees that the OpenAI board is required to negotiate and receive a fair price for everything the OpenAI nonprofit owns that the for-profit is purchasing. But what counts as a fair price? That’s an open question, and one whose answer stands to make or cost people tens of billions of dollars.

But first: What does the OpenAI nonprofit own?

It owns a lot of OpenAI’s IP. Exactly how much is highly confidential, but some experts speculate that the $37 billion figure is probably a reflection of the easily measured, straightforward assets of the nonprofit, like its IP and business agreements.

Secondly, and most crucially, it owns full control over the OpenAI for-profit. As part of this deal, it is definitely going to give that up, either becoming a minority shareholder or ending up with only nonvoting shares. That is, substantially, the whole point of the nonprofit-to-for-profit conversion: After Altman’s ouster, the Wall Street Journal reported, “[I]nvestors began pushing OpenAI to turn into a more typical company.” Investors throwing around billions of dollars don’t want a nonprofit board to be able to fire the CEO because they’re worried he’s too dishonest to make good decisions around powerful new technology. Investors want a normal board that will fire the CEO for normal reasons, like failing to maximize shareholder value.

Control is generally worth a lot more, in for-profit companies, than shares that come without control — often something like 40 percent more. So if the nonprofit is getting a fair deal, it should get some substantive compensation in exchange for giving up control of the company.

Thirdly, investors in OpenAI under its old structure agreed to a “capped profit” model. For most investors, that cap was set at 100x their original investment, so if they invested $1 million, they would get a maximum of $100 million in return. Above that cap, all returns would go to the nonprofit. The logic for this setup was that, under most circumstances, it works out the same as investing in a normal company: Investments don’t usually produce 100x returns, after all, with the exception of early investments in massively successful tech companies like Google or Amazon.
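
To make the mechanics concrete, here is a minimal sketch of how a 100x profit cap splits an investment’s returns. The 100x multiple comes from the description above; the function name and the dollar figures in the examples are illustrative assumptions, not OpenAI’s actual terms or numbers.

```python
def split_proceeds(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Divide an investment's total return between the investor and the nonprofit.

    The investor keeps returns up to cap_multiple times the original investment;
    anything above that flows to the nonprofit.
    """
    cap = investment * cap_multiple
    investor_share = min(total_return, cap)
    nonprofit_share = max(total_return - cap, 0.0)
    return investor_share, nonprofit_share

# A $1 million investment that returns $40 million stays entirely with the investor...
print(split_proceeds(1e6, 40e6))   # (40000000.0, 0.0)
# ...but in an extreme-upside scenario, most of the value spills over to the nonprofit.
print(split_proceeds(1e6, 1e9))    # (100000000.0, 900000000.0)
```

In ordinary outcomes the cap never binds, which is why the structure could be pitched to investors as roughly equivalent to a normal equity stake.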

The capped profit setup would be most significant in the unlikely world where OpenAI attained its ambitious goals and built an AI that fundamentally transformed the world economy. (How likely is that? Experts disagree, rather heatedly, but we shouldn’t discount it altogether.) If that does happen, its value will be nearly unfathomably huge. “OpenAI’s value is mostly in the extreme upside,” AI analyst Zvi Mowshowitz wrote in an analysis of the valuation question.

The company might fail entirely; it might muddle along as a midsized company. But it also might be worth trillions of dollars, or more than that, and most investors are investing on the premise it might be worth trillions of dollars. That means the share of profits owned by the nonprofit would also be worth trillions of dollars. “Most future profits still likely flow to the nonprofit,” Mowshowitz concludes. “OpenAI is shooting for the stars. As every VC in this spot knows, it is the extreme upside that matters. That is what the nonprofit is selling. They shouldn’t sell it cheap.”
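
A toy expected-value calculation shows why the extreme upside dominates. Every number here is hypothetical: the probabilities, the profit figures, and the assumed $500 billion aggregate investor cap are made up for illustration and do not come from OpenAI’s books or from Mowshowitz’s analysis.

```python
scenarios = [
    # (probability, total future profit in dollars) -- all hypothetical
    (0.50, 0.0),     # the company fails outright
    (0.45, 2e11),    # it muddles along or succeeds modestly ($200B of profit)
    (0.05, 2e13),    # it transforms the economy ($20T of profit)
]

# Hypothetical: suppose the 100x caps, summed across investors, let them keep
# at most the first $500B of profit, with everything above that going to the nonprofit.
cap_on_investor_profits = 5e11

expected_total = sum(p * v for p, v in scenarios)
expected_above_cap = sum(p * max(v - cap_on_investor_profits, 0.0) for p, v in scenarios)

print(f"Expected total profit:          ${expected_total / 1e9:,.0f}B")
print(f"Expected profit above the cap:  ${expected_above_cap / 1e9:,.0f}B")
print(f"Share flowing to the nonprofit: {expected_above_cap / expected_total:.0%}")
```

Even with the transformative outcome weighted at only 5 percent, it accounts for the bulk of the expected value in this sketch, and nearly all of that value sits above the cap. That slice above the cap is precisely what the nonprofit would be selling in the conversion.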

So what would be an appropriate valuation? $60 billion? $100 billion? Mowshowitz’s analysis is that a fair price would involve the nonprofit still owning a majority of shares in the for-profit, which is to say at least $80 billion. (Presumably these would be nonvoting shares.)

The only people with full information are the ones with access to the company’s confidential balance sheets, and they aren’t talking. OpenAI and Microsoft will be negotiating the answer, but it’s not clear that either of them particularly wants the nonprofit to get a valuation that reflects, say, the expected value of profits in excess of the cap: the less the nonprofit gets, the more money there is for everyone else who wants a piece of the pie.

There are two forces working toward the nonprofit getting fair compensation: the nonprofit board — whose members are capable people, but also people handpicked by Altman not to get ideas and get in the way of his control of the company — and the law. Experts I spoke with were a bit cynical about the board’s willingness to hold out for a good deal in what is an extremely awkward circumstance for it. “We have kind of already seen what’s going on with the OpenAI board,” Ogden told me.

“I think the common understanding is they’re friendly to Sam Altman, and the ones who were trying to slow things down or protect the nonprofit purpose have left,” Rose Chan Loui, the director of UCLA Law’s nonprofit program, observed to Transformer.

If the board is inclined to go with the flow, the Delaware Attorney General or the IRS could object. These are fundamentally complicated questions about the valuation of a private company, and the law isn’t always good at consistent and principled enforcement in cases like this one. “When you’re talking about numbers like $150 billion,” Horwitz, the UCLA professor, warned, “the law has a way of getting weak.”

Does that mean that Elon Musk’s allegation — that we’re witnessing a bait-and-switch before our eyes, a massive theft of resources that were originally dedicated to the common good — is right after all? I’m not inclined to grant him that much.

Firstly, having spoken to OpenAI leadership and OpenAI employees over the six years I’ve been reporting on the company, I genuinely come away with the impression that the bait-and-switch, to the extent it happened, was completely unintentional.

In 2015, the involved parties really were convinced — as private emails leaked in Musk’s lawsuits show — that a research organization serving the public was the way to achieve their mission. And then over the next few years, as the power of big machine learning models became apparent, they became sincerely convinced they needed to find clever ways to raise money for their research. In 2019, when I spoke with co-founder Greg Brockman and Sutskever, they were enthusiastic about their capped profit structure and saw it as a model for how a company could raise money while ensuring that, if it succeeded, most of the benefits went to humanity as a whole.

Altman has a habit of being all things to all people, even when that may require being less than truthful. His detractors say he’s “deceptive, manipulative, and worse,” and even his supporters will say he’s “extremely good at becoming powerful,” which VCs might consider more of a compliment than the general public does.

But I don’t think Altman was aiming for this predicament. OpenAI did not inflict its current legal headache on itself out of cunning chicanery, but out of a desire to satisfy a number of different early stakeholders, many of them true believers. It was due chiefly to understandable failures of foresight about how much power corporate governance law would really have once employees had millions riding on the company’s continued fundraising and once investors had billions riding on its ability to make a profit.

Secondly, I think it’s far too soon to call this a bait and switch. The nonprofit’s control of OpenAI was meant to give it the power to stop the company from putting profits before the mission. But it turns out that being on a nonprofit board does not come with enough access to the company, or enough real power, to productively turn OpenAI away from the brink, as we discovered last November.

It seems entirely possible that a massive and highly capitalized nonprofit foundation with the aim of ensuring AI benefits humanity is a better approach than a corporate governance agreement with power on paper and none in practice. If the nonprofit gets massively undervalued in the conversion and shooed away with a quarter of the company when more careful estimates suggest it currently controls a majority of the company’s value, then we can call it a bait and switch.

But that hasn’t happened. The correct attitude is to wait and see, to demand transparency, to hold the board to account for getting the valuation it is legally obligated to seek, and to pursue OpenAI to the full extent of the law if it ends up convincing the board to give up its extraordinary bequest at bargain-basement prices.

