Former Board Members Say OpenAI’s Joint Venture with Microsoft is Not Working to Ensure Charitable Purpose

From Nonprofit Joint Ventures: Basics

According to two former OpenAI board members, Helen Toner and Tasha McCauley, OpenAI's joint venture is failing to ensure that public benefit takes precedence over private profit. From their essay in The Economist:

If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI. The organisation was originally established as a non-profit with a laudable mission: to ensure that AGI, or artificial general intelligence—AI systems that are generally smarter than humans—would benefit “all of humanity”. Later, a for-profit subsidiary was created to raise the necessary capital, but the non-profit stayed in charge. The stated purpose of this unusual structure was to protect the company’s ability to stick to its original mission, and the board’s mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately it didn’t work.

Last November, in an effort to salvage this self-regulatory structure, the OpenAI board dismissed its CEO, Sam Altman. The board’s ability to uphold the company’s mission had become increasingly constrained due to long-standing patterns of behaviour exhibited by Mr Altman, which, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols. Multiple senior leaders had privately shared grave concerns with the board, saying they believed that Mr Altman cultivated “a toxic culture of lying” and engaged in “behaviour [that] can be characterised as psychological abuse”. According to OpenAI, an internal investigation found that the board had “acted within its broad discretion” to dismiss Mr Altman, but also concluded that his conduct did not “mandate removal”. OpenAI relayed few specifics justifying this conclusion, and it did not make the investigation report available to employees, the press or the public.

The question of whether such behaviour should generally “mandate removal” of a CEO is a discussion for another time. But in OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the board’s action to dismiss Mr Altman. We also feel that developments since he returned to the company—including his reinstatement to the board and the departure of senior safety-focused talent—bode ill for the OpenAI experiment in self-governance.

Our particular story offers the broader lesson that society must not let the roll-out of AI be controlled solely by private tech companies. Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.

. . . 

Ultimately, we believe in AI’s potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritising the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.

. . . 

Revenue Ruling 98-15 conditions a charity's participation in a joint venture with a for-profit on the charity retaining enough control to ensure that charitable purposes override profit maximization. I think it’s pretty clear by now that OpenAI’s paper compliance with Revenue Ruling 98-15 means nothing in terms of keeping the charitable mission paramount. It sounds as though Microsoft nevertheless engineered a reverse coup, reinstating Sam Altman and thereby demonstrating that profit-making is really in charge. Bloomberg reports that in response to the “damning criticism,” OpenAI created a Safety Committee. From OpenAI’s announcement:

Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.  OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.

A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security. OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.

Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce, who advises OpenAI on security, and John Carlin.

darryll k. jones