Analysis

A high-level view of the impact of AI on the workforce

A transatlantic study makes the right recommendations on artificial intelligence in the workplace, but work is needed to turn these into practice.

14 March 2023
J. Scott Marcus

The EU-US Trade and Technology Council (TTC), a forum established in 2021[1], has been seeking to achieve a positive reset of the EU-US relationship, in terms of both technology and trade policy, after the traumatic Trump years. The initial composition consisted of ten distinct working groups, none of which had more than a passing relationship with labour issues. More recently, however, the TTC has begun to address labour policy, and its relationship to digitalisation, automation, artificial intelligence and more broadly the evolution of technology. There have been two main developments: the creation of a Trade and Labor Dialogue[2], which met for the first time on 20 September 2022, and the production of a joint study on The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America (TTC, 2022), which was published on 5 December 2022.

The report does not represent a deep dive into public policy in either the EU or the US on AI as it relates to the workforce. It makes only a passing mention of the proposed EU Artificial Intelligence Act[3], no mention at all of the algorithmic management sections of the proposed EU Platform Workers Directive[4], and only a brief mention of the US initiative to create an AI ‘Bill of Rights’[5]. Nonetheless, the report is important in that it represents a possible transatlantic convergence of views at governmental level. It helps to establish a constructive and cooperative tone on a constellation of challenging issues. For these reasons, a close look at the report is warranted.

The stated goal of the report – produced by the European Commission and the US Council of Economic Advisers – was “to synthesize the perspectives of the US and European Union and academic work from both countries with a focus on implications relevant to policymakers”. It provides a general background on the degree of take-up of AI and machine learning (ML) in the EU and the US, and reflects at length on both the potential benefits and potential threats of AI together with ML. The core of the report comprises case studies of two sectors in which AI already plays a substantial role: (1) hiring and human resources; and (2) warehousing and related logistics. Here, we summarise the report and reflect on its implications.

Adoption of AI in the US and the EU

In the US, very few firms have adopted AI; however, the proportion of workers exposed to the effects of AI is considerably greater than the number of firms would suggest, because US firms that adopt AI tend to be larger and younger (and are typically led by younger, more educated and more experienced owners). Uses include the application of AI/ML to business processes, machine vision (automated data extraction from images) and natural language processing. Adoption is higher in the information, professional services, management and finance sectors, but is also above average in retail trade, transportation and utilities.

Larger firms in the EU are also more likely to apply AI. AI is most often used to automate workflows, as a base for machine learning and to analyse written language. Chatbots are a growing application. AI is used most in finance, education, and health and social work; robotics, by contrast, is concentrated in manufacturing.

Benefits and risks from adoption of AI/ML

The benefits and the range of application of AI/ML have now become obvious. In the hiring process, to give just one example, the report notes that AI can draft job descriptions, match job requirements with applicant skills and filter out applicants who are a poor fit. It is increasingly possible to apply AI/ML to tasks that it was once thought only humans could carry out, including in many cases tasks that have traditionally been performed by the highly skilled. New applications are being enabled by advances in natural-language processing and computer vision. Businesses adopt these tools to scale up, to lower costs and to make better decisions.

The report’s authors asked the AI text and prose-writing application GPT-3 for its view on the societal risks of AI. It identified four: job losses, inequality, security risks and ethical concerns. Many human experts would share these same concerns, but some might identify more.

An obvious risk is indeed that AI/ML might reduce the number of workers needed. Even if the total number of workers remains stable or increases, the skills required are likely to evolve because of the changes arising from AI/ML, together with other aspects of automation. The report argues that while all workers tended to benefit from technological change between 1945 and the 1980s, in subsequent decades most of the benefits went to highly skilled workers. The advent of AI/ML possibly signals a new phase in which not only jobs at low and medium skill levels are at risk from new technology (as has already been clear for many years; see Brekelmans and Petropoulos, 2020), but also the jobs of higher-skill workers.

In line with a growing body of research, the report notes a tension between the use of AI/ML for increasing the efficiency of workers (referred to in the literature of AI in the workplace as augmentation), versus the replacement of workers by means of AI/ML (referred to in the literature as automation). Even if the pace of new job creation remains in balance with the pace of job destruction – which is likely but not assured – the skills needed are sure to change in a great many professions.

Case study 1: the impact of AI/ML on hiring and human resources

Nearly every aspect of the hiring process has been dramatically influenced by the advent of AI/ML technology. AI/ML can help to draft job descriptions, screen qualifications to avoid wasted time for candidates and potential employers where there is a poor fit, conduct and score scientifically validated tests of promising candidates, and match job listings with qualifications with a rapidity that a human HR specialist could not dream of equalling.

For applicants, AI can likewise facilitate the job search, and may make the applicant aware of new and unexpected ways in which her talents might be applied.

But AI/ML has not replaced the need for human expertise. If anything, it has shifted the focus, making human skills especially important in high-value activities that are not easily automated, such as negotiating final job offers and convincing desired candidates to accept them.

A huge worry is that expanded use of AI/ML “could potentially introduce bias across nearly every stage of the hiring process. … Machine learning algorithms [might] give the appearance of a fair and clean mathematical process while still exhibiting biases” (TTC, 2022). Examples of this are already visible[6]. Systematic bias might be inherent in underlying training data for AI/ML. Various forms of bias might be present even where the bias is unintentional.

Case study 2: the impact of AI/ML on warehousing and logistics

With the shift to just-in-time manufacturing, equivalent modernisation of retailing and the broader shift to globalised value chains, warehousing has changed from a somewhat pedestrian activity to a core activity that plays a significant role in labour markets and in the gross value added of developed economies.

An analysis in TTC (2022) of value added and inflation-adjusted earnings in Germany, France, Italy, Spain, the Netherlands and the US demonstrates that value added has tended to increase faster than earnings – in other words, firms (and their shareholders) have appropriated more of the gains of increased productivity than have workers. Possible reasons include: (1) declining effectiveness of the representation of workers in this sector, eg in trade unions; (2) the relative fungibility of the relatively low-skilled workers who are needed in this sector today; and (3) increased ability of firms to use algorithmic surveillance to drive higher worker productivity, coupled with limited visibility into the functioning of the algorithms on the part of workers or the public.

Indeed, the report treats the growing use of intrusive algorithmic surveillance (Nurski and Hoffman, 2022) as a significant public-policy concern in its own right.

An additional factor potentially of great significance is that in this sector, there is good reason to believe that automation effects (ie replacement of workers) have been more prevalent than augmentation effects (making workers more productive without reducing worker numbers). This inherently implies a decline in the bargaining power of workers.

Reflections on the report’s conclusions

The report identifies only a small number of potential policy interventions, and these are discussed only at high level, but the recommendations are sensible as far as they go:

  • Invest in training and job-transition services so that the employees most disrupted by AI can transition effectively to new positions for which their skills and experience are most applicable.
  • Invest in the capacity of regulatory agencies to ensure that AI systems are transparent and fair for workers.
  • Encourage development and adoption of AI that is beneficial for labour markets.

The need for modernised, flexible job training in response to all forms of automation has been obvious for some time (Petropoulos et al, 2019). Not stated in the report, but equally obvious, is that a shift from traditional education to lifelong learning is likely to prove necessary. A challenge in both the US and the EU is that the responsibility for education is largely delegated, respectively, to state and member-state level, and is often further delegated from there, implying challenges in driving consistent, coherent and effective overall policy change. Beyond that, the pace of technological change may currently be greater than the speed at which education and training systems can adapt. Furthermore, the speed of change makes it difficult to know exactly what skills are likely to be needed in future.

Enhancing regulatory capabilities can be expected to be challenging. Moreover, this is the area in which EU-US differences in approach are likely to be most evident. The report’s authors focus specifically on addressing bias in hiring algorithms, and on mitigating excessive electronic surveillance in the workplace, but these likely represent only the start of what must ultimately become a broader discussion. There appears to be a substantial risk (Aloisi and De Stefano, 2023) of piecemeal approaches and divergent practices on automated decision-making and intrusive algorithmic surveillance between the EU and the US, and for that matter within the EU and within the US.

In terms of encouraging the development and adoption of AI that is beneficial for labour markets, the report identifies three approaches: promoting research, using public procurement and adjusting incentives to encourage firms to place greater emphasis on helping workers become more productive, rather than replacing them by means of automation.

These are sensible recommendations, but they go only so far. Research is a necessary but not a sufficient condition for a satisfactory response to the many challenges that AI/ML poses to the workforce, and the same can be said for public procurement. The third approach, dealing with incentives for firms, is likely to be critical. As the report notes, challenges “… include firm business models that promote cutting costs, economic distortions in the tax and regulatory space that increase the cost to firms of using labor relative to capital, and even the ‘aspirations of researchers’ at private firms who are excited and motivated to develop branches of AI that are more suited to automation ... All these channels might push the society towards an undesirable equilibrium in terms of the balance of automation and augmentation AI technologies”.

In other words, as already visible in the warehousing sector case study, companies may be more motivated to eliminate jobs than to make workers more effective. The report’s observations on incentives for firms are broadly correct, but considerable work will be needed to turn these reflections into effective practice.

Footnotes

[6] Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters, 11 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

References

Aloisi, A. and V. De Stefano (2023) ‘Between Risk Mitigation and Labour Rights Enforcement: Assessing the Transatlantic Race to Govern AI-Driven Decision-Making through a Comparative Lens’, European Labour Law Journal (forthcoming), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337517

Brekelmans, S. and G. Petropoulos (2020) ‘Occupational change, artificial intelligence and the geography of EU labour markets’, Working Paper 03/2020, Bruegel

Nurski, L. and M. Hoffman (2022) ‘The impact of artificial intelligence on the nature and quality of jobs’, Working Paper 14/2022, Bruegel

Petropoulos, G., J.S. Marcus, N. Moës and E. Bergamini (2019) Digitalisation and European welfare states, Blueprint 30, Bruegel

TTC (2022) The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America, economic study prepared in response to the US-EU Trade and Technology Council Inaugural Joint Statement, US-EU Trade and Technology Council, available at https://www.whitehouse.gov/wp-content/uploads/2022/12/TTC-EC-CEA-AI-Report-12052022-1.pdf

About the authors

  • J. Scott Marcus

    J. Scott Marcus is a Senior Fellow at Bruegel, a Brussels-based economics think tank, and also works as an independent consultant dealing with public policy and regulation regarding electronic communications. His work is interdisciplinary and entails economics, political science / public administration, policy analysis, and engineering.

    From 2005 to 2015, he served as a Director for WIK-Consult GmbH (the consulting arm of the WIK, a German research institute in regulatory economics for network industries). From 2001 to 2005, he served as Senior Advisor for Internet Technology for the United States Federal Communications Commission (FCC), as a peer to the Chief Economist and Chief Technologist. In 2004, the FCC seconded Mr. Marcus to the European Commission (to what was then DG INFSO) under a grant from the German Marshall Fund of the United States. Prior to working for the FCC, he was the Chief Technology Officer (CTO) of Genuity, Inc. (GTE Internetworking), one of the world's largest backbone internet service providers.

    Mr. Marcus is a member of the Scientific Committee of the Communications and Media program at the Florence School of Regulation (FSR), a unit of the European University Institute (EUI). He is also a Fellow of GLOCOM (the Center for Global Communications, a research institute of the International University of Japan). He is a Senior Member of the IEEE; has served as co-editor for public policy and regulation for IEEE Communications Magazine; served on the Meetings and Conference Board of the IEEE Communications Society from 2001 through 2005; and was Vice Chair and then Acting Chair of IEEE CNOM. He served on the board of the American Registry of Internet Numbers (ARIN) from 2000 to 2002.

    Marcus is the author of numerous papers and a book on data network design. He either led or served as first author for numerous studies for the European Parliament, the European Commission, and national governments and regulatory authorities around the world.

    Marcus holds a B.A. in Political Science (Public Administration) from the City College of New York (CCNY), and an M.S. from the School of Engineering, Columbia University.
