First glance

The turmoil at OpenAI reveals underlying structural tensions in the AI industry

The competitive nature of the industry means that highly skilled employees are the scarcest resource in AI development.

Publishing date
22 November 2023
Authors
Bertin Martens

OpenAI, one of the leading generative artificial intelligence (GenAI) companies and the maker of ChatGPT, spiralled into turmoil last weekend when the company’s Board suddenly ousted CEO Sam Altman and employees revolted against the Board. OpenAI has a $13 billion collaboration agreement with Microsoft whereby Microsoft provides access to financial resources and the huge cloud computing infrastructure that is essential for training GenAI models. In return, Microsoft can integrate ChatGPT into several of its services, including the Bing search engine, Microsoft 365 Office and Teams. The deal creates mutual dependency. If Microsoft closes access to its cloud infrastructure, OpenAI becomes an empty shell. If OpenAI implodes, Microsoft would suffer serious damage to its AI roll-out strategy.

Irrespective of the new developments at OpenAI, where Sam Altman is set to return as CEO under a new board, the ongoing turmoil reveals underlying structural tensions that run across the AI industry. 

OpenAI’s ambiguous organisational structure reflects tensions between the existential and commercial sides of AI development. It started in 2015 as a non-profit AI research organisation and later added a for-profit commercial arm to generate revenue and facilitate collaboration with other companies, including Microsoft. In line with the White House Executive Order on AI, the UK Bletchley Declaration and the EU’s AI Act (currently in the making), the non-profit arm focused on containing existential risks and aligning AI models with human values, that is, avoiding harm to humans.

The for-profit arm dealt with the hard realities. First, GenAI research requires financial resources and infrastructure beyond the reach of a non-profit organisation. Second, the world of GenAI is extremely competitive. Leaders cannot afford to slow down development and switch resources to improving alignment, because competitors will catch up quickly. It takes deep financial pockets, like Microsoft’s, to do both at the same time. These economic factors may eventually eliminate smaller companies from the race and reduce competition between GenAI model producers. The turmoil at OpenAI suggests that highly skilled employees are the scarcest resource for AI development. Their choices determine who wins.

Another fault line running through the AI industry is open source. The OpenAI founders opted for open-source AI models – hence the name – that are made available to anyone who wants to use and experiment with them. In line with the open-source software movement, this is meant to be a fast road to innovation and to better testing for potential human harm. But open source also raises the risk of uncontrolled and malevolent use.

Some OpenAI founders now argue that, with hindsight, open source was the wrong choice. Nobody could foresee in 2015 the enormous potential and risks that these models would have in 2023 and beyond. Other major AI developers, like Meta, continue with open-source models. The combination of all these trends, including the concentration of AI development in a few large firms and fast-growing existential risks, may constitute an argument for more comprehensive regulation of the AI industry. But we may also be vastly overestimating the potential benefits and harms of current AI models. Pre-emptive regulation risks suffocating competition and innovation.

About the authors

  • Bertin Martens

Bertin Martens is a Senior Fellow at Bruegel. He has been working on digital economy issues, including e-commerce, geo-blocking, digital copyright and media, online platforms, and data markets and regulation, as senior economist at the Joint Research Centre (Seville) of the European Commission, for more than a decade until April 2022. Prior to that, he was deputy chief economist for trade policy at the European Commission, and held various other assignments in the international economic policy domain. He is currently a non-resident research fellow at the Tilburg Law & Economics Centre (TILEC) at Tilburg University (Netherlands).

His current research interests focus on economic and regulatory issues in digital data markets and online platforms, the impact of digital technology on institutions in society and, more broadly, the long-term evolution of knowledge accumulation and transmission systems in human societies. Institutions are tools to organise information flows. When digital technologies change information costs and distribution channels, institutional and organisational borderlines will shift.

    He holds a PhD in economics from the Free University of Brussels.
