In a speech delivered at the Munich Security Conference on 15 February 2020, the Director-General of the World Health Organization (WHO) noted that “fake news spreads faster and more easily than this virus, and is just as dangerous”. In fact, we are in the middle of what the WHO calls an infodemic: “too much information including false or misleading information in digital and physical environments during a disease outbreak”.
The spread of misinformation and disinformation, especially online and on social media, has contributed to COVID-19 vaccine hesitancy and reduced use of face masks, putting millions of lives at risk. A recent YouGov poll found that “one in five Americans believes the US government is using the COVID-19 vaccine to microchip the population” while “90% of those who reject vaccination fear possible side effects from the vaccine more than they fear COVID-19 itself”.
To address this, we first need to understand the business model of social media platforms and the monetisation cycle of content based on fake news. Reducing the monetary incentives for the creation and propagation of such content is the best way to safeguard the truth and protect human lives, while respecting freedom of expression.
On social media platforms everyone can be a source of information, express their views or react to the views of others. The reach of these platforms has created an interconnected world in which views, ideas and information spread beyond borders. It has also created new opportunities for individuals to attract the attention of online users through content creation and to derive monetary rewards from online advertising.
Social media platforms operate algorithmic systems, powered by artificial intelligence and machine-learning technologies, that assess the background and profile of users from the information they supply to the platform: post content, likes, reactions and responses to other posts. This allows platforms to offer personalised services, matching each individual to the specific content they are most likely to view and interact with. By keeping users engaged for longer, social media platforms increase their revenue from advertising. Content creators also benefit from personalisation by selling ads alongside their content. The ads are provided either by the social media platform (for example, video creators on YouTube and Facebook can earn monetary rewards from ad breaks in their videos) or by advertising platforms (the main players are the online advertising subsidiaries of Google and Facebook) before the content is shared through social media to generate traffic and ad revenue. Effective personalised matching between users and content means that, as more users are drawn to the content, creators extract higher monetary rewards from online ads. Typically, this reward is a fixed minority share of the platform’s total ad revenue from that specific content.
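The revenue-sharing arrangement described above can be sketched with a toy calculation. The view count, the CPM (ad revenue per thousand impressions) and the 45% creator share below are hypothetical assumptions for illustration, not figures from any platform:

```python
# Toy sketch of the ad revenue split between a platform and a
# content creator. All numbers are hypothetical assumptions.

def creator_payout(ad_revenue: float, creator_share: float = 0.45) -> float:
    """The creator receives a fixed minority share of the ad
    revenue that their content generates for the platform."""
    return ad_revenue * creator_share

# Assume 1,000,000 views and a hypothetical CPM of $5 per 1,000
# ad impressions, giving $5,000 of total ad revenue.
views = 1_000_000
cpm = 5.0
ad_revenue = views / 1_000 * cpm

print(ad_revenue)                  # 5000.0
print(creator_payout(ad_revenue))  # 2250.0 at a 45% share
```

The point of the sketch is that the creator’s payout scales linearly with the attention the content attracts: doubling the views doubles the payout, which is the incentive the rest of the article is concerned with.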
Fake news can be more effective at attracting attention online, and therefore more profitable, than facts. People with strong or biased views have a greater incentive to view content that matches their ideology, and the value they derive is reinforced when they meet and interact online with other users who share those views. The supply of stories that match their ideology in turn spurs these users to engage more, in a vicious cycle that increases the probability of a ‘snowball’ effect: posts based on fake news, often carrying radical ideas and opinions, become prominent at a speed that does not allow their validity to be assessed. A recent study concluded that fake news is 70% more likely to be retweeted and spreads 10 to 20 times faster than the truth. Selling ads alongside fake news is therefore more likely to attract a large number of viewers, earning content creators thousands of dollars, while platforms also keep a substantial share of the revenue from ads attached to fake news.
To address misinformation related to public health issues, we need to develop new measures for policy action, such as the proposals in Van Alstyne (2021).
One way is to make it more difficult to spread fake news, which has already been tried without much success. Some progress has been made: platforms have started attaching truth-verification labels to posts and removing fake news about COVID-19 vaccines (and the accounts associated with it), under pressure from the US government. In many cases, however, the logic of the specific measures has been questioned (suspending the accounts of researchers who study fake news, while letting racist abuse fly under the radar), and their effectiveness is doubtful (some of the super-spreaders of fake news about COVID-19 vaccines have been identified, but are still able to continue their profitable activities on social media). More importantly, these measures were decided by the platforms themselves, despite their monetary gain from fake news. This suggests a conflict of interest.
We certainly need bolder, more effective measures against the spread of fake news that do not harm freedom of expression. An apparent obstacle is the lack of a working definition of fake news that distinguishes it from radical views, hate speech or opinion, which makes it unlikely that content-removal measures alone can deliver a solution.
There is an alternative: restrict the monetisation of misinformation through advertising. This would tackle the problem at its root: the incentive to supply fake news. The proposal by Nobel laureate in economics Paul Romer to tax digital advertising could be a starting point, but it needs significant modification to combat misinformation effectively. The design of the tax instrument should follow the basic principle of a Pigouvian tax: tax a market activity that generates negative externalities in order to correct the resulting market failure. Fake news generates negative externalities, especially in the context of public health. The tax should be imposed on the revenue platforms earn from ads linked to content labelled as fake news. This digital ad tax would reduce not only the revenue platforms earn from misinformation but also the monetary rewards that fake-content creators can extract from their online presence, because platforms would pass on a smaller share of revenues to them. In this way, we can effectively reduce the supply of stories with strong negative social externalities.
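As an illustration of the mechanism (not a calibrated policy design), a Pigouvian tax levied on the platform’s ad revenue from content labelled as fake news shrinks both the platform’s and the creator’s payoff. The 50% tax rate, the revenue figure and the 45% creator share below are hypothetical assumptions:

```python
# Hypothetical sketch of a Pigouvian tax on ad revenue tied to
# content labelled as fake news. All rates are assumptions.

def payoffs(ad_revenue: float, creator_share: float = 0.45,
            tax_rate: float = 0.0) -> tuple:
    """Split ad revenue between platform and creator after the tax
    is levied on the platform's revenue from the flagged content."""
    tax = ad_revenue * tax_rate
    net_revenue = ad_revenue - tax
    creator = net_revenue * creator_share
    platform = net_revenue - creator
    return platform, creator, tax

# Ordinary content vs. content flagged as fake news (50% tax assumed):
print(payoffs(5000.0))                # (2750.0, 2250.0, 0.0)
print(payoffs(5000.0, tax_rate=0.5))  # (1375.0, 1125.0, 2500.0)
```

In this sketch the tax halves both parties’ earnings from the flagged content, which is the intended effect: once the platform’s cut shrinks, so does the share it is willing to pass on to the creator.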
The advantages of this approach are three-fold. First, there is no concern about violating freedom of speech, a fundamental right in our democracies. People remain free to express themselves, but those spreading fake news would no longer be able to make big money out of it. Second, it is not difficult to identify the content creators who derive substantial benefits from their prominence on social media platforms. In the context of COVID-19 vaccines in particular, the distribution of misinformation appears to originate from a small number of accounts before the “snowball” effect magnifies its impact. Third, it removes the platforms’ conflict of interest: if platforms cannot monetise fake news, they have a stronger incentive to combat it effectively.
The implementation of this content-based tax requires careful thought. The identification of professional fake-news content creators should be left to independent expert bodies of auditors and fact-checkers. Specific rules should ensure that these bodies have the freedom and the right to access all necessary information from social media platforms, and that they receive the platforms’ full cooperation.
Combating fake news effectively requires a multitude of instruments that address the different dimensions of the problem. The proposed fake news tax can be part of the policy response, addressing the profit motive behind fake news. Understanding the economics behind the emergence of the infodemic on social media platforms is vital to designing the proper policies to combat it. With COVID-19 still an ongoing threat to everyone, we should build a protective wall against the virus based on scientific fact and accurate information.
Petropoulos, G. (2021) ‘The great infodemic: time to consider a fake news tax’, Bruegel Blog, 26 August