
Why We Need Tax Exempt OpenAI to Succeed

Will AI take over the world and enslave us? — Steemit

A feature in last Monday’s WSJ explains why we need OpenAI to succeed. And why we need OpenAI’s joint venture with Microsoft to succeed. As to the latter, the author points out that AI research is exceedingly expensive. And ESG-style AI research is even more expensive because profit-making is subordinated to moral values and justice. So OpenAI needs more capital than is available from government or philanthropists. The author asserts, too, that we need civil society to moderate the amorality of profit-seekers so that machines programmed by only a few, or, inevitably, by themselves, don’t conquer the world and enslave us all. He thinks only civil society can make sure AI is developed and deployed consistently with moral values and justice. That’s what OpenAI is trying to do, you know. Keep the machines from taking over the world. We need them to be successful. Right now, the OpenAI insiders — guardians of morality and justice at first — are just too close to the money. Their skyrocketing private profits, as measured by the appreciating stock they themselves hold in the OpenAI Operating Company, render Pollyannaish, I expect, whatever faith we might have that they will always pursue values and justice. But we need OpenAI to succeed. The fiduciaries oughta put some kinda process or a wall or something between their stock ownership, their role as insiders of tax exempt OpenAI, and OpenAI’s alleged majority control of the OpenAI OpCo required by Rev. Rul. 98-15. A blind trust to make us feel better if nothing else. Otherwise, it’s just a matter of time. Here is some of the article:

Today’s large language models, the computer programs that form the basis of artificial intelligence, are impressive human achievements. Behind their remarkable language capabilities and impressive breadth of knowledge lie extensive swaths of data, capital and time. Many take more than $100 million to develop and require months of testing and refinement by humans and machines. They are refined, up to millions of times, by iterative processes that evaluate how close the systems come to the “correct answer” to questions and improve the model with each attempt.
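For readers wondering what “refined, up to millions of times, by iterative processes” means mechanically, here is a minimal, purely illustrative Python sketch of that loop: the system is scored against known “correct answers” and nudged toward them on each pass. The toy data and linear model are my own assumptions for illustration, not anything from the article or from any real LLM pipeline, which runs a far larger version of the same idea over enormous text corpora.

```python
import numpy as np

# Toy illustration (not a real language model): the system has parameters,
# is scored against known "correct answers," and each iteration nudges the
# parameters to reduce the error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # stand-in for training examples
true_w = rng.normal(size=5)            # stand-in for the "correct answers"
y = X @ true_w

w = np.zeros(5)                        # the model starts out knowing nothing
learning_rate = 0.01
for step in range(10_000):             # the iterative refinement loop
    predictions = X @ w
    error = predictions - y            # how close to the correct answer?
    gradient = X.T @ error / len(y)    # direction that reduces the error
    w -= learning_rate * gradient      # improve the model with each attempt

print("remaining error:", float(np.mean((X @ w - y) ** 2)))
```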

What’s still difficult is to encode human values. That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs. This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. 
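A rough sketch of the “extra step” the author describes: in Reinforcement Learning from Human Feedback, humans compare pairs of candidate responses, a reward model is fit so that the preferred response scores higher, and the language model is then tuned against that reward model. The snippet below is a toy, hypothetical illustration of only the preference-fitting part (a Bradley–Terry-style objective on made-up feature vectors), not OpenAI’s or anyone else’s production system.

```python
import numpy as np

# Toy RLHF-style preference fitting: humans say which of two candidate
# responses they prefer; we fit a linear "reward model" so that preferred
# responses score higher. Feature vectors here are invented for illustration.
rng = np.random.default_rng(1)
n_pairs, n_features = 200, 4
preferred = rng.normal(loc=0.5, size=(n_pairs, n_features))  # chosen responses
rejected = rng.normal(loc=0.0, size=(n_pairs, n_features))   # rejected responses

w = np.zeros(n_features)   # reward model parameters
lr = 0.1
for _ in range(2_000):
    margin = (preferred - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))   # P(human prefers the chosen response)
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad                      # push preferred responses to score higher

print("avg reward gap (preferred - rejected):",
      float(((preferred - rejected) @ w).mean()))
```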

But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies. At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.

There’s little consensus around how we can rein in these risks. The press has reported a variety of explanations for the tensions at OpenAI in November, including that behind the then-board’s decision to fire CEO Sam Altman was a conflict between commercial incentives and the safety concerns core to the board’s nonprofit mission. Potential commercial offerings like the ability to fine-tune the company’s ChatGPT program for different customers and applications could be very profitable, but such customization could also undermine some of OpenAI’s basic safeguards in ChatGPT. Tensions like that around AI risk will only become more prominent as models get smarter and more capable. We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.

I recently attended a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing. To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models. To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations. 

One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements. Testing companies would compete for dollars and talent, aiming to scale their capabilities at the same breakneck speed as the models they’re checking. As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators. Such a competitive market for testing innovation would have similar dynamics to what we currently have for the creation of new models, where we’ve seen explosive advances in short timescales. Without such a market and the competitive incentives it brings, governments, research labs and volunteers will be left to guarantee the safety of the most powerful systems ever created by humans, using tools that lag generations behind the frontier of AI research.

darryll k. jones