Who Gets To Decide OpenAI(c)(3)’s Fate?

Who exactly “owns” nonprofits? And who gets to make the decisions? Those are the questions implicitly raised by a proposed amicus brief supporting Musk’s effort to enjoin OpenAI(c)(3) from selling its assets to a for-profit Delaware public benefit corporation. Encode, a prominent, knowledgeable, and respected California nonprofit devoted to safe artificial intelligence, is seeking permission to submit the brief. OpenAI(c)(3)’s board proposes to sell its assets and use the proceeds to fund further AI research “for the benefit of all humanity.” From the sound of things, there is a lot of future private benefit for the contemplated public benefit corporation. More about that in a later post. Encode argues that OpenAI(c)(3)’s planned sell-off is a bad idea. Encode and Musk might be right, but they don’t get to make the decision.

OpenAI(c)(3)’s plans are typically described as a “conversion,” a term that suggests impropriety. A nonprofit can’t ever really “convert.” Its assets are forever owned by the public. That’s the answer to the first question. The nonprofit can decide to go in a different charitable direction or maintain course by different means. That’s it. That’s the bargain the fiduciaries struck when they applied for tax exemption or a state nonprofit charter.

But there is another part to that bargain. Upon application and grant of a nonprofit charter or tax exemption, fiduciaries and taxpayers agree that fiduciaries shall manage and taxpayers shall subsidize. This is like the relationship between directors and shareholders in a for-profit corporation. Like shareholders, taxpayers gotta stay in their lane. Nothing would be accomplished if shareholders got to act like managers too, second-guessing every decision and gumming up the works. They don’t get to veto the managers’ business decisions simply because they disagree, and they can’t conjure up judicial usurpation of managerial decisions simply because they disagree, either. It’s called the business judgment rule, and it applies to nonprofit corporations, too. As long as the fiduciaries comply with their duties – good faith, due care, and loyalty – their decisions are presumptively final. Musk knows that, so he has made all sorts of allegations that OpenAI(c)(3)’s fiduciaries have violated their duties.

The business judgment rule applies less forcefully when a board embarks upon a major action like merging or selling off the business or a substantial portion of it. Stockholders normally have a say, and judges are more likely to question boards when the proposed action is as significant as selling off the whole store. I am not so sure the rule should apply less forcefully to a major change in a nonprofit corporation. But even if it does, the fiduciaries’ decision enjoys a presumption of correctness not easily overcome.

Musk and Encode essentially seek to elevate their judgment as “shareholders,” or a court’s judgment as “government,” over that of the manager/fiduciaries. As to the latter, it would be inconsistent with the whole notion of civil society if the government – by legislation or judicial ruling – retained the power to decide that nonprofit fiduciaries must take one path in pursuit of government-defined public benefit rather than another, just as legal, path in pursuit of grass-roots-defined public benefit. If that were the case, nonprofits would be mere extensions of government. But nonprofits exist as alternatives to government definition and pursuit of public benefit. They are constrained in that alternative effort only along the far-off margins called “public policy.”

Musk and amicus think the public good would be better pursued by OpenAI(c)(3) keeping its invaluable assets rather than selling them and using the proceeds to continue the charitable mission in a different way. Nobody is challenging the notion that developing safe AI is a charitable endeavor. Musk and amicus just think there is a better way, and so they run headlong into the business judgment rule. They think OpenAI(c)(3) should never sell its invaluable assets at any price. They might be right, but neither they nor judges get to make that decision. Not even if a judge thinks Musk and Encode know better than OpenAI(c)(3)’s board. The Board must independently, and with care, loyalty, and good faith, consider the options. But the Board gets to decide, and its decision need not be “right” or “correct.” It must only be appropriately considered. Here is an excerpt from the amicus brief:

OpenAI plans to transfer control of its operations to a Delaware public benefit corporation (PBC). That would do more than shift control from one kind of “inc.” to another, leaving the organization’s mission in place. It would convert an organization bound by law to ensure the safety of advanced AI into one bound by law to “balance” its consideration of any public benefit against “the pecuniary interests of [its] stockholders.” Del. C. § 365(a). OpenAI’s touted fiduciary duty to humanity would evaporate, as Delaware law is clear that the directors of a PBC owe no duty to the public at all. 8 Del. C. § 365(b).

III. Control Over Development and Deployment of AGI Is a Charitable Asset that Should Not Be Sold for Any Price

OpenAI Inc. currently controls all of the other entities in the OpenAI corporate family, including those entities directly engaged in developing safe and beneficial AGI per OpenAI’s charitable mission. That control is itself a charitable asset of the nonprofit.  OpenAI touts the technology it is developing as capable of totally transforming society. It warns potential investors in the existing capped-profit subsidiary controlled by the nonprofit that “[i]t would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.” In other words, OpenAI believes its technology may so alter our society that money itself ceases to have value. And the company views this transformation as a one-time occurrence. As Altman notably put it, “AGI [is] going to get built exactly once.”

Taking OpenAI at its word, what price could a for-profit enterprise possibly pay that would adequately compensate the nonprofit for controlling how such a singular transformation of society unfolds? It is priceless. The public interest would be harmed by a safety-focused, mission-constrained nonprofit relinquishing control over something so transformative at any price to a for-profit enterprise with no enforceable commitment to safety.

darryll k. jones