The EU Artificial Intelligence Act: premature or precocious regulation?

Publishing date
11 March 2024
Bertin Martens

The European Union’s Artificial Intelligence Act is essentially a product safety regulation. Risks can be assessed reasonably well for older-generation single-purpose artificial intelligence (AI) models. Risk assessment becomes more problematic for the latest generative AI models, such as ChatGPT, which can be moulded to an almost infinite range of purposes. The AI Act tries to work around this with a general obligation to avoid harm to human users. General-purpose AI (GPAI) models are a source of concern, especially when they become very large and pose systemic risks. GPAI developers must build in ‘guardrails’ to avoid harmful responses. However, there are many ways to circumvent these guardrails.

There is also vigorous competition between AI start-ups and big tech firms, with no sign yet of emerging monopolistic gatekeepers. However, smaller firms need access to the large AI cloud-computing infrastructure that big tech firms dominate. Big tech firms can put AI models to work in their established business services to generate revenue. This leads to close collaboration agreements between start-ups and big firms that may tend towards integration. AI start-ups must set up their own business services, often with riskier open-source AI models that become platforms where deployers can plug in their own applications. New AI-driven ecosystems are emerging that combine AI models with existing online services. The borderline between developers, deployers and users is becoming blurred.

The Act as it stands today is just the start of a long regulatory process. It delegates responsibility to the European Commission and its newly created AI Office to draft implementing acts and guidelines to address these challenges. These will drive enforcement of the Act and determine the extent to which it will be a precocious instrument that stimulates trustworthy AI innovation or a premature regulation that smothers it.

Read more about the European Union AI Act in Bertin Martens' Analysis 'The European Union AI Act: premature or precocious regulation?'

The Why Axis is a weekly newsletter distributed by Bruegel, bringing you the latest research on European economic policy. 

Sign up for the newsletter. 

About the authors

  • Bertin Martens

Bertin Martens is a Senior Fellow at Bruegel. He worked on digital economy issues, including e-commerce, geo-blocking, digital copyright and media, online platforms, and data markets and regulation, as senior economist at the Joint Research Centre (Seville) of the European Commission for more than a decade, until April 2022. Prior to that, he was deputy chief economist for trade policy at the European Commission and held various other assignments in the international economic policy domain. He is currently a non-resident research fellow at the Tilburg Law & Economics Centre (TILEC) at Tilburg University (Netherlands).

His current research interests focus on economic and regulatory issues in digital data markets and online platforms, the impact of digital technology on institutions in society and, more broadly, the long-term evolution of knowledge accumulation and transmission systems in human societies. Institutions are tools to organise information flows. When digital technologies change information costs and distribution channels, institutional and organisational borderlines will shift.

    He holds a PhD in economics from the Free University of Brussels.

