OpenAI appears to have decided that transferring control away from its nonprofit isn’t worth the hassle after all.
The AI giant on Monday announced that its nonprofit would remain in control of its for-profit division, which is responsible for its money-making chatbot ChatGPT and other products, even after the subsidiary transforms into a public benefit corporation.
“We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California,” the company wrote in a blog post published to its website.
OpenAI started nearly a decade ago as a nonprofit research organization focused on the safe development of AI. In 2019, it added a for-profit arm, governed by its nonprofit, to help raise funds for its mission. In September, the company officially announced that it planned to move to a for-profit business model.
CEO Sam Altman also published a letter he sent to OpenAI employees discussing the changes, which you can read in full below:
Sam Altman’s letter to employees
OpenAI is not a normal company and never will be.
Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
When we started OpenAI, we did not have a detailed sense of how we were going to accomplish our mission. We started out staring at each other around a kitchen table, wondering what research we should do. Back then, we did not contemplate products or a business model. We could not contemplate the direct benefits of AI being used for medical advice, learning, productivity, and much more, or the need for hundreds of billions of dollars of compute to train models and serve users.
We did not really know how AGI was going to get built or used. A lot of people could imagine an oracle that could tell scientists and presidents what to do; although it could be incredibly dangerous, maybe those few people could be trusted with it.
A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
We are committed to this path of democratic AI. We want to put incredible tools in the hands of everyone. We are amazed and delighted by what people are creating with our tools, and how much they want to use them. We want to open source very capable models. We want to give our users a great deal of freedom in how they use our tools within broad boundaries, even if we don’t always share the same moral framework, and to let our users make decisions about the behavior of ChatGPT.
We believe this is the best path forward—AGI should enable all of humanity to benefit each other. We realize some people have very different opinions.
We want to build a brain for the world and make it super easy for people to use for whatever they want (subject to few restrictions; freedom shouldn’t impinge on other people’s freedom, for example).
People are using ChatGPT to increase their productivity as scientists, coders, and much more. People are using ChatGPT to solve serious healthcare challenges they are facing and learn more than ever before. People are using ChatGPT to get advice about how to handle difficult situations. We are very proud to offer a service that is doing so much for so many people; it is one of the most direct fulfillments of our mission we can imagine.
But people want to use it much more; we currently cannot supply nearly as much AI as the world wants, and we have to put usage limits on our systems and run them slowly. As the systems become more capable, people will want to use them even more, for even more wonderful things.
We had no idea this was going to be the state of the world when we launched our research lab almost a decade ago. But now that we see this picture, we are thrilled.
It is time for us to evolve our structure. There are three things we want to accomplish:
- We want to be able to operate and get resources in such a way that we can make our services broadly available to all of humanity, which currently requires hundreds of billions of dollars and may eventually require trillions of dollars. We believe this is the best way for us to fulfill our mission and to get people to create massive benefits for each other with these new tools.
- We want our nonprofit to be the largest and most effective nonprofit in history, focused on using AI to enable the highest-leverage outcomes for people.
- We want to deliver beneficial AGI. This includes helping shape safety and alignment; we are proud of our track record with the systems we have launched, the alignment research we have done, processes like red teaming, and transparency into model behavior with innovations like the model spec. As AI accelerates, our commitment to safety grows stronger. We want to make sure democratic AI wins over authoritarian AI.
We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware. We look forward to advancing the details of this plan in continued conversation with them, Microsoft, and our newly appointed nonprofit commissioners.
OpenAI was founded as a nonprofit, is today a nonprofit that oversees and controls the for-profit, and going forward will remain a nonprofit that oversees and controls the for-profit. That will not change.
The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose-driven companies like Patagonia. We think it makes sense for us, too.
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission. And as the PBC grows, the nonprofit’s resources will grow, so it can do even more. We’re excited to soon get recommendations from our nonprofit commission on how we can help make sure AI benefits everyone—not just a few. Their ideas will focus on how our nonprofit work can support a more democratic AI future, and have real impact in areas like health, education, public services, and scientific discovery.
We believe this sets us up to continue to make rapid, safe progress and to put great AI in the hands of everyone. Creating AGI is our brick in the path of human progress; we can’t wait to see what bricks you will add next.
Sam Altman
May 2025