
The age of competition in generative artificial intelligence has begun

Competition in generative artificial intelligence spurs disruptive, potentially beneficial innovations, but not without costs and risks.

Published 11 May 2023

The release of ChatGPT by OpenAI in November 2022 triggered a wave of new generative artificial intelligence products. While AI is already used to automate tasks, generative AI can produce content, from text and images to computer code, from a simple prompt. With this creative capability, generative AI has shown its potential to disrupt entrenched markets and create new ones.

This rapid development of generative AI has marked the beginning of a new era of competition, in which firms race to deliver new breakthroughs. Large technology companies including Google and Microsoft have deployed generative AI in many of their flagship services, while more than 150 startups (according to one count) have built applications on top of generative AI models called ‘foundation models’.

Generative AI is quickly becoming a valuable tool. Businesses and individuals can increase productivity, accelerate innovation and develop new solutions faster. Benefits can include cost reduction and greater inclusivity in communication. But those benefits will not come without challenges. Students and workers will have to upskill and employers will need to rethink the delegation of tasks to their employees. People will have to learn to treat AI-generated content cautiously and not accept it at face value. These challenges require long-term solutions at a time of rapid innovation.

There are also significant financial costs associated with generative AI models. Generative AI relies on large language models (LLMs), which allow the AI to process natural-language input and to generate natural-language and visual output. LLMs are trained on large volumes of data in widely used languages (such as English or French) using intense computing power, and thus necessitate significant investment. For instance, Microsoft invested a total of $11 billion in OpenAI ($1 billion in 2019 and $10 billion in 2023) to back the development of OpenAI’s models on Microsoft’s own cloud services.

Other firms, including Amazon, Anthropic, Baidu, DeepMind, Google, Hugging Face, Meta, OpenAI, the Beijing Academy of AI (BAAI) and Yandex, have also developed LLMs, while new players will likely emerge in the future. Researchers are already developing small language models, which use fewer trainable parameters than LLMs, to mitigate the financial and environmental costs of LLMs. Investment by venture capitalists in generative AI is booming, with an increase of nearly 500% in only two years, from $230 million in 2020 to $1.37 billion in 2022.

While firms and research organisations innovate intensively and compete to provide access to their generative AI models, some experts have already warned that only today’s big tech companies have the capacity to offer such models. This could reinforce their ecosystems and concentrate generative AI models in the hands of just a few large firms, potentially raising competition issues in the future.

Even before the advent of generative AI, competition authorities worldwide focused their efforts on fostering competition in digital markets through reports, market investigations and antitrust cases. Ongoing investigations into the cloud market in France, the United Kingdom and the United States might explore the impact of generative AI on competition, as the generative AI models of Google, Amazon and Microsoft, among others, become available through their cloud services. Generative AI could reinforce their positions and increase concentration in the cloud market.

Since the current competitive process is effective, competition authorities should intervene only where market power is misused. They should resist the temptation to pick winners and losers under political pressure to promote national champions.

In particular, Europe might attempt to use recent digital competition laws, including the Digital Markets Act (DMA), which aims to lower entry barriers and tackle the disadvantages faced by small and medium-sized firms by imposing a list of dos and don’ts on large technology firms.

As generative AI is evolving quickly, intervention by competition authorities would be premature at best and counterproductive at worst. Indeed, intervention without evidence of misuse of market power, or of excessive concentration through acquisitions, would distort the competitive process. At the same time, competition authorities are unlikely to adopt a wait-and-see approach and have already voiced their interest in watching generative AI closely to avoid market concentration. Competition authorities should therefore monitor generative AI developments, as the UK competition authority has done by launching an investigation into foundation models, to ensure markets remain open to new entrants and applications.

Policymakers also have a role in ensuring competition in generative AI by making legal conditions predictable. Privacy (use of personal data), intellectual property rights (use of protected data) and AI governance (use of high-risk AI) are some of the complex issues policymakers must address urgently. Policymakers should also ensure that generative AI remains inclusive and sustainable. This will require firms and research institutions to have the human and financial resources to develop generative AI models that support minority languages and require less computing power.

The potential of generative AI to transform the economy and society is enormous. As the age of generative AI has only begun, everyone will benefit from responsible development in a competitive environment that remains accessible to all.

About the authors

  • Christophe Carugati

    Dr. Christophe Carugati was an affiliate fellow at Bruegel on digital and competition issues until December 2023.

    He holds a doctorate in law and economics on big data and competition law from Paris II University, a master’s from the European Master in Law and Economics programme (EMLE; University of Bologna, Hamburg and Vienna), a master’s in business law from Aix-Marseille University, and a double bachelor’s in law and economics from Toulouse School of Economics (TSE). His academic research focuses on the adaptation of competition law to the data-driven economy and the regulation of platforms.

    He teaches a competition law seminar to master’s students at Lille University. Before joining Bruegel, he was a senior policy analyst at the US technology think-tank The Center for Data Innovation, where he worked on digital issues. He also gained experience in competition law practice through internships at law firms in Paris.
