OpenAI is a nonprofit-corporate hybrid: A management expert explains how this model works – and how it fueled the tumult around CEO Sam Altman's short-lived ouster
- Written by Alnoor Ebrahim, Professor of Management, Tufts University
The board of OpenAI, creator of the popular ChatGPT and DALL-E artificial intelligence tools, fired Sam Altman, its chief executive officer[1], in late November 2023.
Chaos ensued as investors and employees rebelled. By the time the mayhem had subsided five days later, Altman had returned triumphantly[2] to the OpenAI fold amid staff euphoria, and three of the board members[3] who had sought his ouster had resigned.
The structure of the board – a nonprofit board of directors overseeing a for-profit subsidiary – seems to have played a role in the drama.
As a management scholar who researches organizational accountability, governance and performance[4], I’d like to explain how this hybrid approach is supposed to work.
Hybrid governance
Altman co-founded OpenAI[5] in 2015 as a tax-exempt nonprofit with a mission[6] “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.” To raise more capital than it could amass through charitable donations, OpenAI later established a holding company that enables it to take money from investors for a for-profit subsidiary it created.
OpenAI’s leaders chose this “hybrid governance[7]” structure to enable it to stay true to its social mission while harnessing the power of markets to grow its operations and revenues. Merging profit with purpose has enabled OpenAI to raise billions from investors seeking financial returns while balancing “commerciality with safety and sustainability[8], rather than focusing on pure profit-maximization,” according to an explanation on its website.
Major investors thus have a large stake in the success of its operations. That’s especially true for Microsoft, which owns 49% of OpenAI’s for-profit subsidiary after investing US$13 billion in the company[9]. But those investors aren’t entitled to board seats as they would be in typical corporations.
And the profits OpenAI returns to its investors are capped at approximately 100 times[10] what the initial investors put in. This structure calls for it to revert to a nonprofit[11] once that point is reached. At least in principle, this design was intended to prevent the company from veering from its purpose of benefiting humanity safely and to avoid compromising its mission by recklessly pursuing profits.
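To see what that cap means in practice, here is a simplified back-of-the-envelope illustration – it assumes a flat 100x multiple, though the reported terms vary by investment round:

$$\text{maximum payout} \approx 100 \times \text{initial investment}$$

On that assumption, an early US$10 million stake could return at most about US$1 billion, and any value generated beyond the cap would, in principle, stay with the nonprofit rather than flow to investors.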
Other hybrid governance models
There are more hybrid governance models[12] than you might think.
For example, the Philadelphia Inquirer[13], a for-profit newspaper, is owned by the Lenfest Institute[14], a nonprofit. The structure allows the newspaper to attract investments without compromising on its purpose – journalism serving the needs of its local communities.
Patagonia[15], a designer and purveyor of outdoor clothing and gear, is another prominent example. Its founder, Yvon Chouinard, and his heirs have permanently transferred their ownership to a nonprofit trust[16]. All of Patagonia’s profits now fund environmental causes.
Anthropic, one of OpenAI’s competitors, also has a hybrid governance structure, but it’s set up differently from OpenAI’s. It has two distinct governing bodies: a corporate board and what it calls a long-term benefit trust[17]. Because Anthropic is a public benefit corporation[18], its corporate board may consider the interests of other stakeholders besides its owners – including the general public.
And BRAC[19], an international development organization founded in Bangladesh in 1972 that’s among the world’s largest NGOs[20], controls several for-profit social enterprises that benefit the poor. BRAC’s model resembles OpenAI’s in that a nonprofit owns for-profit businesses.
Origin of the board’s clash with Altman
The primary responsibility of the nonprofit board is to ensure that the mission of the organization it oversees is upheld. In hybrid governance models, the board has to ensure that market pressures to make money for investors and shareholders don’t override the organization’s mission – a risk known as mission drift[21].
Nonprofit boards have three primary duties[22]: the duty of obedience, which obliges them to act in the interest of the organization’s mission; the duty of care, which requires them to exercise due diligence in making decisions; and the duty of loyalty, which commits them to avoiding or addressing conflicts of interest.
It appears that OpenAI’s board sought to exercise the duty of obedience[23] when it decided to sack Altman. The official reason given was that he was “not consistently candid in his communications[24]” with its board. Additional rationales raised anonymously by people identified as “Concerned Former OpenAI Employees[25]” have not been verified.
In addition, board member Helen Toner[26], who left the board amid this upheaval, had co-authored a research paper[27] just a month before the board moved to depose Altman. Toner and her co-authors praised Anthropic’s precautions and criticized OpenAI’s “frantic corner-cutting” around the release of its popular ChatGPT chatbot.
Mission v. money
This wasn’t the first attempt to oust Altman on the grounds that he was straying from the organization’s mission.
In 2021, the organization’s head of AI safety, Dario Amodei, unsuccessfully tried to persuade the board to oust Altman because of safety concerns[28], just after Microsoft invested $1 billion in the company. Amodei later left OpenAI, along with about a dozen other researchers, and founded Anthropic[29].
The seesaw between mission and money is perhaps best embodied by Ilya Sutskever, an OpenAI co-founder, its chief scientist and one of the three board members who were forced out or stepped down[30].
Sutskever first defended the decision[31] to oust Altman on the grounds that it was necessary for protecting the mission of making AI beneficial to humanity. But he later changed his mind[32], tweeting: “I deeply regret my participation in the board’s actions.”
He eventually signed the employee letter calling for Altman’s reinstatement and remains the company’s chief scientist.
AI risks
An equally important question is whether the board exercised its duty of care.
I believe it’s reasonable for OpenAI’s board to question whether the company released ChatGPT[34] with sufficient guardrails in November 2022. Since then, large language models have wreaked havoc in many industries.
I’ve seen this firsthand as a professor.
It has become nearly impossible in many cases to tell whether students are cheating on assignments by using AI. Admittedly, this risk pales in comparison to AI’s ability to do even worse things, such as helping to design pathogens of pandemic potential[35] or creating disinformation and deepfakes[36] that undermine social trust and endanger democracy.
On the flip side, AI has the potential to provide huge benefits to humanity[37], such as speeding the development of lifesaving vaccines.
But the potential risks are catastrophic. And once this powerful technology is released, there is no known “off switch[38].”
Conflicts of interest
The third duty, loyalty, depends on whether board members had any conflicts of interest.
Most obviously, did they stand to make money from OpenAI’s products, such that they might compromise its mission in the expectation of financial gain? Typically the members of a nonprofit board are unpaid[39], and those who aren’t working for the organization have no financial stake in it. CEOs report to their boards, which have the authority to hire and fire them.
Until OpenAI’s recent shake-up, however, three of its six board members were paid executives[40] – the CEO, the chief scientist and the president of its profit-making arm.
I’m not surprised that while the three independent board members all voted to oust Altman, all of the paid executives ultimately backed him. Earning your paycheck from an entity you are supposed to oversee is considered a conflict of interest[41] in the nonprofit world.
I also believe that even if OpenAI’s reconfigured board manages to put the mission of serving society’s needs ahead of maximizing profits, that alone would not be enough.
The tech industry is dominated by the likes of Microsoft, Meta and Alphabet – massive for-profit corporations, not mission-driven nonprofits. Given the stakes, I think regulation with teeth[42] is required – leaving governance in the hands of AI’s creators will not solve the problem.
References
- ^ fired Sam Altman, its chief executive officer (theconversation.com)
- ^ Altman had returned triumphantly (apnews.com)
- ^ three of the board members (www.cnbc.com)
- ^ organizational accountability, governance and performance (scholar.google.com)
- ^ Altman co-founded OpenAI (openai.com)
- ^ tax-exempt nonprofit with a mission (openai.com)
- ^ hybrid governance (openai.com)
- ^ commerciality with safety and sustainability (openai.com)
- ^ investing US$13 billion in the company (www.cnbc.com)
- ^ 100 times (techcrunch.com)
- ^ revert to a nonprofit (www.wired.com)
- ^ hybrid governance models (doi.org)
- ^ Philadelphia Inquirer (www.inquirer.com)
- ^ Lenfest Institute (www.lenfestinstitute.org)
- ^ Patagonia (theconversation.com)
- ^ transferred their ownership to a nonprofit trust (www.patagonia.com)
- ^ long-term benefit trust (www.anthropic.com)
- ^ public benefit corporation (www.law.cornell.edu)
- ^ BRAC (www.brac.net)
- ^ among the world’s largest NGOs (www.humanrightscareers.com)
- ^ risk known as mission drift (doi.org)
- ^ Nonprofit boards have three primary duties (theconversation.com)
- ^ sought to exercise the duty of obedience (www.cnn.com)
- ^ not consistently candid in his communications (apnews.com)
- ^ Concerned Former OpenAI Employees (gist.github.com)
- ^ board member Helen Toner (www.theguardian.com)
- ^ co-authored a research paper (cset.georgetown.edu)
- ^ oust Altman because of safety concerns (www.ft.com)
- ^ founded Anthropic (www.cnbc.com)
- ^ one of the three board members who were forced out or stepped down (www.cnbc.com)
- ^ Sutskever first defended the decision (www.wsj.com)
- ^ changed his mind (www.washingtonpost.com)
- ^ the company released ChatGPT (www.axios.com)
- ^ pathogens of pandemic potential (www.vox.com)
- ^ disinformation and deepfakes (ai100.stanford.edu)
- ^ benefits to humanity (www.forbes.com)
- ^ off switch (www.nytimes.com)
- ^ nonprofit board are unpaid (www.501c3.org)
- ^ three of its six board members were paid executives (openai.com)
- ^ conflict of interest (cullinanelaw.com)
- ^ regulation with teeth (theconversation.com)