The recent chaos at OpenAI not only revealed a deep-seated schism within the company's unique organizational structure but also cast a glaring light on a critical industry-wide division: the push for safety regulations and responsible AI stewardship versus ambitions for rapid innovation and commercial distribution of AI. Now that the dust has settled, with Sam Altman back at OpenAI as CEO and a new initial board formed, how will the company navigate this precarious terrain? Can OpenAI position itself as a leading platform for thoughtful AI research while reconciling Silicon Valley techno-optimism with its commercial ambitions?
The non-profit arm of OpenAI was founded in 2015, with Musk, Thiel, and others committing an initial investment of $1B, dedicated to advancing AI to benefit "humanity as a whole." The company's original structure was that of a think tank focused on research and responsible stewardship; as stated in its charter, OpenAI's "primary fiduciary duty is to humanity." In 2019, the for-profit subsidiary was launched with a capped-profit structure and a $1B investment from Microsoft, driven by ambitions for rapid, widespread innovation. The tension inherent in this unusual corporate structure, with its polarity of aims between the two branches, was further exacerbated by the fact that the non-profit board retained control over both entities.
The debate between the need for safety and consumer protections and corporate ambitions for rapid commercialization and distribution has been at the center of the tech industry for nearly three decades. Nonetheless, the importance of regulations for AI cannot be overstated. AI safety concerns carry far-reaching, dangerous implications that go well beyond what we've seen in previous tech booms: misinformation (through deepfakes, for example), algorithmic bias, and privacy violations that can lead to socioeconomic inequality and market volatility. Additionally, there are concerns about weapons automation and the development of uncontrollable, self-aware AI.
In early November, the AI Safety Summit, held in the United Kingdom and hosted by Prime Minister Rishi Sunak, focused on such concerns. While two dozen nations agreed on the problem at hand, as reported by Nature, many governments hesitated to address the topic of regulation head-on.
The Biden administration released its executive order on October 30, signaling its commitment to AI regulation. However, enforcement is critical here, and it's still unclear how that will be handled. According to Matt Calkins, Founder and CEO of Appian, "The way to keep AI safe is not by limiting […] who can use it but rather by limiting what data can be trained to create these AI models. If you don’t want it to be used to make a dangerous weapon, then don’t train it on weapons data. If you don’t want it to make a new virus, then don’t allow it to be trained on viral data." Calkins firmly believes in regulating the data on which AI is trained rather than the current focus on who has access to AI.
While the level of international participation in AI safety discussions and efforts may be heartening to some, it may be equally disheartening to others. Regardless of your stance, the critical pressure point is industry-wide ambitions for widespread AI commercialization, distribution, and integration. And competition in the race to develop AI products is fierce. In 2022, global AI investments reached $91.9B across healthcare, fintech, data management, cybersecurity, advertising, and retail, alongside many other industries.
To date, OpenAI has operated, in many ways, as a democratized platform on which other companies can build value using its technology. This is evident across its products, including ChatGPT, DALL-E, and Whisper. After its recent turmoil, OpenAI is now equipped with an emboldened new board, including Microsoft as a non-voting observer, and is positioned to re-envision its corporate structure. Will OpenAI embrace a new approach to balancing safety and innovation? Only time will tell.