Joanna Bryson, Professor of Ethics and Technology, Hertie School
Martin Ulbrich, Policy Officer, European Commission, DG CONNECT
Annika Linck, Senior EU Policy Manager, European Digital SME Alliance
AI regulation could build trust in AI and thereby speed up uptake by EU businesses, generating demand for EU-compliant, EU-made AI. The European Commission, whose proposal for regulating AI will be published shortly before the event, insists that regulating AI would propel the EU into its digital industrial future. The event proposes to test this narrative.
Invited guests will discuss the extent to which European firms are deterred from adopting or developing AI by a lack of trust. And if a lack of trust is holding businesses back, might private labels or standards not better address the industrial coordination problem and consumer concerns? The speakers will assess the costs and benefits of regulating AI from the perspective of consumers and businesses, but also of citizens. Are there cases where the EU’s AI industrial ambition might conflict with its position on fundamental rights and consumer protection?
The European Commission (EC) unveiled its proposal for regulating AI on April 21, 2021. Meeting the following day, the panellists discussed the economic rationale behind the proposal. According to the EC, regulation is necessary to increase trust in AI—so that more people, businesses, and public administrations come to adopt the technology and boost their productivity. This increased uptake would stimulate Europe’s AI business ecosystem. But at what price?
Martin Ulbrich opened the event by presenting the regulatory proposal as good for business. He explained that many companies have been calling for regulation—two-thirds say that a lack of trust is slowing AI adoption. Furthermore, by focusing on a narrow set of high-risk AI systems, the EC minimised the administrative burden arising from the regulation.
Joanna Bryson argued that while regulating AI itself is impossible, regulating the people who use and develop it is essential. We don’t need to trust the AI. We need to trust the humans who build, train, test, deploy, and monitor the technology.
Annika Linck presented the view of European digital SMEs. In a focus-group survey, 40% of digital SMEs said they viewed ethical and trustworthy AI as a boon to innovation. But she emphasised that important barriers to wider AI adoption remain in the EU, namely limited access to finance, data, and skilled personnel.
In the Q&A that followed, the panellists largely agreed that AI regulation was an opportunity for European businesses—not a threat to their global competitiveness. Joanna Bryson welcomed the risk-based approach taken by the Commission, whereby riskier AI is more strictly regulated. Martin Ulbrich predicted that the requirements pertaining to high-risk AI (e.g. around training data and human oversight) would be relatively uncontroversial, and that the debate would instead focus on which AI systems should be classified as ‘high-risk’. Annika Linck envisaged the possible emergence of an ecosystem of ethical-AI providers in Europe.