OpenAI, one of the leading generative artificial intelligence (GenAI) companies and the maker of ChatGPT, spiralled into turmoil last weekend when the company’s Board suddenly ousted CEO Sam Altman and employees revolted against the Board. OpenAI has a $13 billion collaboration agreement with Microsoft whereby Microsoft provides access to financial resources and a huge cloud computing infrastructure that is essential for training GenAI models. In return, Microsoft can integrate ChatGPT in several of its services, including the Bing search engine, Microsoft 365 Office and Teams. The deal creates mutual dependency: if Microsoft closes access to its cloud infrastructure, OpenAI becomes an empty shell; if OpenAI implodes, Microsoft suffers serious damage to its AI roll-out strategy.
Irrespective of the new developments at OpenAI, where Sam Altman is set to return as CEO under a new board, the ongoing turmoil reveals underlying structural tensions that run across the AI industry.
OpenAI’s ambiguous organisational structure reflects tensions between the existential and commercial sides of AI development. It started in 2015 as a non-profit AI research organisation and later added a for-profit commercial arm to generate revenue and facilitate collaboration with other companies, including Microsoft. In line with the White House Executive Order on AI, the UK Bletchley Declaration and the EU’s AI Act (currently in the making), the non-profit arm focused on containing existential risks and aligning AI models with human values: avoiding harm to humans.
The for-profit arm dealt with the hard realities. First, GenAI research requires financial resources and infrastructure beyond the reach of a non-profit organisation. Second, the world of GenAI is extremely competitive. Leaders cannot afford to slow down development and switch resources to improving alignment, because competitors would catch up quickly. It takes deep financial pockets, like Microsoft’s, to do both at the same time. These economic factors may eventually eliminate smaller companies from the race and reduce competition between GenAI model producers. The turmoil at OpenAI suggests that highly skilled employees are the scarcest resource in AI development. Their choices determine who wins.
Another fault line running through the AI industry is open source. The OpenAI founders opted for open-source AI models – hence the name – that are made available to anyone who wants to use and experiment with them. In line with the open-source software movement, this is meant to be a fast road to innovation and to better testing for potential human harm. But open source also raises the risk of uncontrolled and malevolent use.
Some OpenAI founders argue that, with hindsight, open source proved to be the wrong choice. Nobody could foresee in 2015 the enormous potential and risks that these models would have in 2023 and beyond. Other major AI developers, like Meta, continue with open-source models. The combination of all these trends, including the concentration of AI development in a few large firms and fast-growing existential risks, may constitute an argument for more comprehensive regulation of the AI industry. But we may also be vastly overestimating the potential benefits and harms of current AI models. Pre-emptive regulation risks suffocating competition and innovation.