First glance

Tech firms’ promise to fight election fakes is a good start, but only a start

Digital firms must work with urgency to tackle deceptive content designed to mislead voters

Publishing date
20 February 2024
J. Scott Marcus

Tackling the risk of technology-driven election manipulation is more pressing than ever in 2024, with its procession of elections in major countries.

This is particularly the case at a time when bad actors arguably have greater incentives than ever to use technology to manipulate elections. A ‘perfect storm’ could emerge from the increasingly tense geopolitical situation, the return of kinetic war, the loss of public confidence in peaceful multilateral solutions and the remarkably rapid improvements in artificial intelligence technology (especially the ability to create convincing deep-fake images, audio and video). The risks and the stakes are very high.

With this as a backdrop, at the Munich Security Conference on 16 February, twenty leading tech firms signed a new Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The Accord seeks to combat deceptive election content, which the Accord defines as “convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

The twenty firms are Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta (the parent of Facebook), Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, TruePic and X (formerly Twitter) – in other words, nearly all of the firms that are important for providing tools that could be used to create deep fakes and to disseminate them to the public.

The firms committed to implement on a voluntary basis a range of measures, including assessing the risks to elections posed by AI technologies, detecting fake election-related content, continuing discussions with stakeholders and supporting awareness-raising initiatives.

The Accord has already been criticised on the basis that firms will not do enough as a result of purely voluntary commitments, that there is no enforcement mechanism and that the commitments are too vague. Such criticisms are valid, but also beside the point.

This is not the time for mandatory global obligations to combat deceptive election content. No one is currently in a position to lay out a complete and comprehensive solution to the problem of deceptive election content. Even if a perfect solution were known, there would not be time to implement it before the elections this year. Firms can, however, take many helpful steps. Instead of a mandatory framework, what was called for, in line with what Otto von Bismarck once said of politics in general, was an accord that reflected “the art of the possible”. Microsoft chairman Brad Smith has portrayed it as a start: “While many more steps will be needed,” he said, the Accord “marks the launch of a genuinely global initiative to take immediate practical steps and generate more and broader momentum”.

The reality, moreover, is that some of the signatory firms are much more able to take concrete steps than others. Most already have human resources invested in dealing with false or misleading content, and substantial bases of tools and research. These firms will intensify what they are already doing – they are already making statements about how they propose to implement the Accord, including the use of the C2PA standard to identify the source of content in a way that is protected with a cryptographic hash and signature. The Accord also represents a commitment for firms that otherwise compete with one another to cooperate in fighting deceptive election content.
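The C2PA approach mentioned above binds provenance metadata to a piece of content by hashing the content, recording the hash in a manifest and signing the manifest, so that any later alteration of the content breaks verification. The real standard is far richer (X.509 certificate chains, embedded JUMBF manifests, assertion structures); purely as a simplified illustration of the hash-and-sign idea, not of the actual C2PA format, a sketch in Python using only the standard library (all names and the HMAC key here are hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the sketch; real C2PA uses public-key
# signatures backed by X.509 certificates, not an HMAC secret.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(content: bytes, creator: str) -> dict:
    """Record the content hash and creator, then sign the claim."""
    claim = {
        "creator": creator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the content still matches its hash."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...original image bytes..."
manifest = make_manifest(image, creator="Example Camera App")
print(verify_manifest(image, manifest))         # unaltered content verifies: True
print(verify_manifest(image + b"x", manifest))  # any edit breaks the hash: False
```

The point of the design is that provenance travels with the content: a platform receiving the file can check, cryptographically, both who attested to it and whether it has been modified since.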

Firms that have not already invested in staff resources and research to address deep fakes will be hard-pressed to do much new technical implementation in the limited time available. Nevertheless, they should at minimum be able to adhere to the portions of the Accord that call for exchanging best practice, transparency about their practices and engaging with stakeholders and the public.

The Accord is a good start, but only a start. A statement of principles alone will not solve the election deep-fake problem. To mitigate the risks, a great deal of concerted and coordinated work is called for, by a large number of parties, in a short period of time.

About the authors

  • J. Scott Marcus

J. Scott Marcus is a Senior Fellow at Bruegel, a Brussels-based economics think tank, and also works as an independent consultant on policy and regulation of electronic communications. His work is interdisciplinary and entails economics, political science / public administration, policy analysis, and engineering.

    From 2005 to 2015, he served as a Director for WIK-Consult GmbH (the consulting arm of the WIK, a German research institute in regulatory economics for network industries). From 2001 to 2005, he served as Senior Advisor for Internet Technology for the United States Federal Communications Commission (FCC), as a peer to the Chief Economist and Chief Technologist. In 2004, the FCC seconded Mr. Marcus to the European Commission (to what was then DG INFSO) under a grant from the German Marshall Fund of the United States. Prior to working for the FCC, he was the Chief Technology Officer (CTO) of Genuity, Inc. (GTE Internetworking), one of the world's largest backbone internet service providers.

    Mr. Marcus is a member of the Scientific Committee of the Communications and Media program at the Florence School of Regulation (FSR), a unit of the European University Institute (EUI). He is also a Fellow of GLOCOM (the Center for Global Communications, a research institute of the International University of Japan). He is a Senior Member of the IEEE; has served as co-editor for public policy and regulation for IEEE Communications Magazine; served on the Meetings and Conference Board of the IEEE Communications Society from 2001 through 2005; and was Vice Chair and then Acting Chair of IEEE CNOM. He served on the board of the American Registry of Internet Numbers (ARIN) from 2000 to 2002.

Marcus is the author of numerous papers and of a book on data network design. He either led or served as first author for numerous studies for the European Parliament, the European Commission, and national governments and regulatory authorities around the world.

    Marcus holds a B.A. in Political Science (Public Administration) from the City College of New York (CCNY), and an M.S. from the School of Engineering, Columbia University.
