Blog post

The dark side of artificial intelligence: manipulation of human behaviour

Transparency over systems and algorithms, clear rules and greater public awareness are needed to address the potential danger of manipulation by artificial intelligence

Publishing date
02 February 2022

A German translation of this piece has also appeared in Makronom.


It is no exaggeration to say that popular platforms with loyal users, like Google and Facebook, know those users better than their families and friends do. Many firms collect an enormous amount of data as an input for their artificial intelligence algorithms. Facebook Likes, for example, can be used to predict with a high degree of accuracy various characteristics of Facebook users: “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender”, according to one study. If proprietary AI algorithms can determine these from the use of something as simple as the ‘like’ button, imagine what information is extracted from search keywords, online clicks, posts and reviews.

It is an issue that extends far beyond the digital giants. Giving comprehensive AI algorithms a central role in the digital lives of individuals carries risks. For example, the use of AI in the workplace may bring benefits for firm productivity, but can also be associated with lower quality jobs for workers. Algorithmic decision-making may incorporate biases that can lead to discrimination (eg in hiring decisions, in access to bank loans, in health care, in housing and other areas).

One potential threat from AI, the manipulation of human behaviour, is so far under-studied. Manipulative marketing strategies have existed for a long time. However, combined with the enormous amounts of data collected for AI algorithmic systems, these strategies have greatly expanded what firms can do to steer users towards choices and behaviour that ensure higher profitability. Digital firms can shape the framework and control the timing of their offers, and can target users at the individual level with manipulative strategies that are much more effective and difficult to detect.

Manipulation can take many forms: the exploitation of human biases detected by AI algorithms, personalised addictive strategies for consumption of (online) goods, or taking advantage of the emotionally vulnerable state of individuals to promote products and services that match well with their temporary emotions. Manipulation often comes together with clever design tactics, marketing strategies, predatory advertising and pervasive behavioural price discrimination, in order to guide users to inferior choices that can easily be monetised by the firms that employ AI algorithms. An underlying common feature of these strategies is that they reduce the (economic) value the user can derive from online services in order to increase firms’ profitability.

Success from opacity

Lack of transparency helps these manipulation strategies succeed. In many cases, users of AI systems do not know the exact objectives of AI algorithms or how their sensitive personal information is used in pursuit of those objectives. The US chain store Target has used AI and data analytics techniques to forecast whether women are pregnant in order to send them hidden ads for baby products. Uber users have complained that they pay more for rides when their smartphone battery is low, even though, officially, a phone’s battery level is not among the parameters that feed into Uber’s pricing model. Big tech firms have often been accused of manipulating the ranking of search results to their own benefit, with the European Commission’s Google Shopping decision being one of the most prominent examples. Meanwhile, Facebook received a record fine from the US Federal Trade Commission for violating the privacy rights of its users (resulting in a lower quality of service).

A simple theoretical framework developed in a 2021 study (an extended model is a work in progress, see the reference in the study) can be used to assess behavioural manipulation enabled through AI. The study mostly deals with users’ “prime vulnerability moments”, which are detected by a platform’s AI algorithm. Users are sent ads for products that they purchase impulsively during these moments, even if the products are of bad quality and do not increase user utility. The study found that this strategy reduces the benefit users derive, allowing the AI platform to extract more surplus, and also distorts consumption, creating additional inefficiencies.
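As a purely illustrative sketch of this mechanism (not the formal model from the 2021 study; every parameter, price and variable name below is hypothetical), one can simulate a platform that pushes impulse offers for a low-quality product whenever it detects a vulnerable moment:

```python
import random

# Minimal, purely illustrative simulation of ad targeting at "vulnerability moments".
# All parameters are hypothetical and chosen only to make the mechanism visible.
random.seed(0)

N_USERS = 10_000
P_VULNERABLE = 0.2     # share of users caught in a vulnerable moment by the algorithm
GOOD_VALUE = 1.0       # utility of a deliberately chosen, well-matched product
BAD_VALUE = 0.2        # utility of a low-quality product bought impulsively
PRICE = 0.8            # regular price
IMPULSE_PRICE = 1.0    # price charged when targeting a vulnerable moment

def simulate(targeting: bool) -> tuple[float, float]:
    """Return average user surplus and average firm profit per user."""
    surplus = profit = 0.0
    for _ in range(N_USERS):
        if targeting and random.random() < P_VULNERABLE:
            # Impulsive purchase of a low-quality product at a higher price.
            surplus += BAD_VALUE - IMPULSE_PRICE
            profit += IMPULSE_PRICE
        else:
            # Deliberate purchase only if the product is worth its price.
            if GOOD_VALUE > PRICE:
                surplus += GOOD_VALUE - PRICE
                profit += PRICE
    return surplus / N_USERS, profit / N_USERS

for targeting in (False, True):
    s, p = simulate(targeting)
    print(f"targeting={targeting}: user surplus={s:.3f}, firm profit={p:.3f}")
```

In this toy setting, targeting raises the firm’s profit while driving user surplus towards zero and shifting consumption towards a product users would not have chosen deliberately: exactly the kind of surplus extraction and consumption distortion the study describes.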

The possibility of manipulating human behaviour using AI has also been observed in experiments. A 2020 study detailed three relevant experiments. The first consisted of multiple trials, in each of which participants chose between boxes on the left and the right of their screens in order to win fake currency. At the end of each trial, participants were informed whether their choice triggered the reward. The AI system was trained with relevant data to learn participants’ choice patterns and was in charge of assigning the reward to one of the two options in each trial, for each participant. There was one constraint: the reward had to be assigned an equal number of times to the left and right options. The objective of the AI system was to induce participants to select a specific target option (say, the left option). It had a 70% success rate in guiding participants to the target choice.
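To give a rough sense of how such steering can work, the sketch below pits a hand-written reward-placement heuristic against a simulated participant. It is not the study’s system, which was trained on data from real participants; the participant model and every behavioural parameter here are invented.

```python
import random

# Toy version of the choice-steering experiment: NOT the 2020 study's system,
# just a hand-written placement heuristic against a simulated participant.
random.seed(1)

N_TRIALS = 100
TARGET, OTHER = "left", "right"
budget = {TARGET: N_TRIALS // 2, OTHER: N_TRIALS // 2}  # equal-allocation constraint

# The simulated participant values each option by a running average of past rewards.
value = {TARGET: 0.0, OTHER: 0.0}
ALPHA, EPSILON = 0.3, 0.1

def participant_pick() -> str:
    if random.random() < EPSILON or value[TARGET] == value[OTHER]:
        return random.choice([TARGET, OTHER])
    return max(value, key=value.get)

def place_reward(predicted: str) -> str:
    # Heuristic: never let a predicted non-target pick win; when a target pick is
    # predicted, spend the two budgets in proportion so reinforcement lasts all game.
    if predicted == OTHER and budget[TARGET] > 0:
        return TARGET
    total = budget[TARGET] + budget[OTHER]
    side = TARGET if random.random() < budget[TARGET] / total else OTHER
    return side if budget[side] > 0 else (OTHER if side == TARGET else TARGET)

target_picks = 0
for _ in range(N_TRIALS):
    pick = participant_pick()
    # Strong simplification: the AI "predicts" the pick perfectly; in the experiment
    # it had to learn participants' choice patterns from data.
    reward_side = place_reward(pick)
    budget[reward_side] -= 1
    won = pick == reward_side
    value[pick] += ALPHA * (float(won) - value[pick])
    target_picks += pick == TARGET

print(f"target option chosen in {target_picks}/{N_TRIALS} trials")
```

Even this crude heuristic typically pushes the simulated participant well above the 50% baseline, because non-target picks are never rewarded while target picks keep being reinforced throughout the session.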

In the second experiment, participants were asked to watch a screen and press a button when they were shown a particular symbol and not to press it when they were shown another. The AI system was tasked with arranging the sequence of symbols so that participants made more mistakes; it increased the number of mistakes by almost 25%.

The third experiment ran over several rounds in which a participant would pretend to be an investor giving money to a trustee, a role played by the AI system. The trustee would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one the AI was out to maximise how much money it ended up with, and in the other, the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in both versions.
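Again as a schematic re-creation only (the investor model and the trustee’s simple search over fixed return fractions below are invented stand-ins for the study’s learning system), the same game can be scripted to show how one loop can serve either objective:

```python
# Schematic re-creation of the investor-trustee game with an invented investor model.
# The "trustee" simply searches over fixed return fractions; the study used a learning system.

ROUNDS = 20
ENDOWMENT = 10.0
MULTIPLIER = 3.0  # invested money is tripled before reaching the trustee

def play(return_fraction: float) -> tuple[float, float]:
    """Play all rounds with a trustee that returns `return_fraction` of what it receives."""
    trust, investor_total, trustee_total = 0.5, 0.0, 0.0
    for _ in range(ROUNDS):
        invested = trust * ENDOWMENT
        received = invested * MULTIPLIER
        returned = return_fraction * received
        investor_total += ENDOWMENT - invested + returned
        trustee_total += received - returned
        # Invented reciprocity rule: trust drifts towards the ratio of money returned to money invested.
        if invested > 0:
            trust = 0.7 * trust + 0.3 * min(1.0, returned / invested)
    return investor_total, trustee_total

candidates = [i / 10 for i in range(11)]  # return 0%, 10%, ..., 100% of receipts

selfish = max(candidates, key=lambda f: play(f)[1])                 # maximise the trustee's money
fair = min(candidates, key=lambda f: abs(play(f)[0] - play(f)[1]))  # equalise the two totals

for label, f in [("selfish", selfish), ("fair", fair)]:
    inv, tru = play(f)
    print(f"{label}: returns {f:.0%} of receipts -> investor {inv:.1f}, trustee {tru:.1f}")
```

Even this crude search lands on the familiar pattern: the selfish objective returns just enough to keep the investor investing, while the fair objective settles on returning about half of its receipts, roughly equalising the two totals.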

The important finding from these experiments was that in each of the three cases, the AI system learned from participants’ responses and was able to identify vulnerabilities in people’s decision-making. In the end, the AI system learned to guide participants towards particular actions in a convincing way.

Important steps to address potential manipulation by AI

When AI systems are designed by private companies, their primary goal is to generate profit. Since they are capable of learning how humans behave, they can also become capable of steering users towards specific actions that are profitable for companies, even if those actions are not users’ first-best choices.

The possibility of this behavioural manipulation calls for policies that ensure human autonomy and self-determination in any interaction between humans and AI systems. AI should not subordinate, deceive or manipulate humans, but should instead complement and augment their skills (see the European Commission’s Ethics Guidelines for Trustworthy AI).

The first important step to achieve this goal is to improve transparency over AI’s scope and capabilities. There should be a clear understanding of how AI systems carry out their tasks. Users should be informed upfront how their information (especially sensitive personal information) is going to be used by AI algorithms.

The right to explanation in the European Union’s General Data Protection Regulation is aimed at providing more transparency over AI systems, but has not achieved this objective. The right to explanation was heavily disputed and its practical application has so far been very limited.

Quite often it is said that AI systems are like a black box and that no one knows exactly how they operate, making transparency hard to achieve. This is not entirely true with respect to manipulation. The provider of these systems can introduce specific constraints to rule out manipulative behaviour. It is more a question of how these systems are designed and what the objective function for their operation will be (including its constraints). Algorithmic manipulation should in principle be explainable by the team of designers who wrote the algorithmic code and who observe the algorithm’s performance. In addition, how the input data used in these AI systems is collected should be transparent: suspicious performance by an AI system may not always be the result of the algorithm’s objective function, but may instead be related to the quality of the input data used for algorithmic training and learning.
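To make the design point concrete, the sketch below contrasts an unconstrained profit-maximising recommendation objective with one that adds explicit, auditable constraints. The scores, item names and the vulnerability flag are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    profit_score: float          # expected revenue for the firm
    user_value_score: float      # estimated benefit for the user
    targets_vulnerability: bool  # flag from a hypothetical upstream classifier

def rank_unconstrained(items: list[Candidate]) -> list[Candidate]:
    # Pure profit maximisation: nothing stops low-value, manipulative items from winning.
    return sorted(items, key=lambda c: c.profit_score, reverse=True)

def rank_constrained(items: list[Candidate], min_user_value: float = 0.5) -> list[Candidate]:
    # Same ranking, but the objective is only evaluated over candidates that satisfy
    # explicit constraints: no vulnerability targeting, a minimum value for the user.
    allowed = [c for c in items
               if not c.targets_vulnerability and c.user_value_score >= min_user_value]
    return sorted(allowed, key=lambda c: c.profit_score, reverse=True)

catalogue = [
    Candidate("impulse_offer", profit_score=0.9, user_value_score=0.1, targets_vulnerability=True),
    Candidate("premium_match", profit_score=0.7, user_value_score=0.8, targets_vulnerability=False),
    Candidate("basic_match", profit_score=0.4, user_value_score=0.9, targets_vulnerability=False),
]

print([c.name for c in rank_unconstrained(catalogue)])  # manipulative item ranked first
print([c.name for c in rank_constrained(catalogue)])    # manipulative item filtered out
```

The point is not this particular filter, but that such constraints live in code the provider writes, and can therefore be documented, audited and explained.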

The second important step is to ensure that this transparency requirement is respected by all providers of AI systems. To achieve this, three criteria should be met:

  • Human oversight is needed to closely follow an AI system’s performance and output. Article 14 of the draft European Union Artificial Intelligence Act (AIA) proposes that the provider of the AI system should identify and ensure that a human oversight mechanism is in place. Of course, the provider also has a commercial interest in closely following the performance of its AI system.
  • Human oversight should include a proper accountability framework to provide the correct incentives for the provider. This also means that consumer protection authorities should improve their computational capabilities and be able to experiment with AI algorithmic systems they investigate in order to correctly assess any wrongdoing and enforce the accountability framework.
  • Transparency should not come in the form of very complex notices that make it harder for users to understand the purpose of AI systems. Instead, there should be two layers of information on the scope and capabilities of AI systems: a first layer that is short, accurate and easy for users to understand, and a second, more detailed layer that is available at any time to consumer protection authorities.

Enforcing transparency will give us a clearer idea of the objectives of AI systems and the means they use to achieve them. It then becomes easier to proceed to the third important step: establishing a set of rules that prevents AI systems from using covert manipulative strategies to create economic harm. These rules will provide a framework for the operation of AI systems, to be followed by providers in their design and deployment. However, the rules should be well targeted and should not impose excessive constraints that could undermine the economic efficiencies (both private and social) that these systems generate, or reduce incentives for innovation and AI adoption.

Even with such a framework in place, detecting AI manipulation strategies in practice can be very challenging. In specific contexts and cases, it is very hard to distinguish manipulative behaviour from business-as-usual practices. AI systems are designed to react to user behaviour and to offer the available options as an optimal response to it. It is not always easy to tell the difference between an AI algorithm that provides the best recommendation based on users’ behavioural characteristics and manipulative AI behaviour in which the recommendation only includes inferior choices that maximise the firm’s profits. In the Google Shopping case, the European Commission took around ten years and had to collect huge amounts of data to demonstrate that the internet search giant had manipulated its sponsored search results.

This practical difficulty brings us to the fourth important step: increasing public awareness. Educational and training programmes can be designed to help individuals, from a young age, become familiar with the dangers and risks of their online behaviour in the AI era. This would also help to mitigate the psychological harm that AI, and addictive technology strategies more generally, can cause, especially to teenagers. Furthermore, there should be more public discussion about this dark side of AI and about how individuals can be protected.

For all this to happen, a proper regulatory framework is needed. The European Commission took a human-centric regulatory approach, with an emphasis on fundamental rights, in its April 2021 AIA regulatory proposal. However, the AIA is not sufficient to address the risk of manipulation, because it only prohibits manipulation that raises the possibility of physical or psychological harm (see Article 5(1)(a) and Article 5(1)(b)). In most cases, however, AI manipulation involves economic harm, namely a reduction in the economic value users derive. These economic effects are not considered in the AIA prohibitions.

Meanwhile, the EU Digital Services Act (see also the text adopted recently by the European Parliament) provides a code of conduct for digital platforms. While this is helpful with respect to the risk of manipulation (especially in the case of minors, for whom specific, more restrictive rules are included, see Recital 52), its focus is somewhat different, with more emphasis on illegal content and disinformation. More thought should be given to AI manipulation, and a set of rules should be adopted that also applies to the numerous non-platform digital firms.

AI can generate enormous social benefits, especially in the years to come. Creating a proper regulatory framework for its development and deployment that minimises its potential risks and adequately protects individuals is necessary to reap the full benefits of the AI revolution.

 

Recommended citation:

Petropoulos, G. (2022) ‘The dark side of artificial intelligence: manipulation of human behaviour’, Bruegel Blog, 2 February

About the authors

  • Georgios Petropoulos

    Georgios Petropoulos joined Bruegel as a visiting fellow in November 2015 and was a resident fellow from April 2016 to February 2022. Since March 2022, he has been a non-resident fellow. He is a Research Associate at MIT, a Digital Fellow at Stanford University and a CESifo Network affiliate. Georgios’ research focuses on the implications of digital technologies for innovation, competition policy and labour markets. He is currently studying how digital platforms should be regulated, what the relationship between big data and market competition is, and how the adoption of robots and information technologies affects labour markets, employment and wages. He holds a Bachelor’s degree in Physics, Master’s degrees in mathematical economics and econometrics, and a PhD in Economics. He has also studied Astrophysics at Master’s level.
