Employment is undergoing a profound transformation driven by progress in artificial intelligence. Workers who acquire AI proficiency are increasingly differentiated from those who do not. According to PwC, individuals with AI skills earned 56 per cent more in 2024 than those without, more than double the previous year’s gap [1]. New roles such as “prompt engineer,” “head of AI” and “responsible AI use architect” have appeared across industries, and those who know how to deploy AI in the workplace are being promoted more quickly, recruited more aggressively and rewarded more generously.
Despite this acceleration, large-scale job displacement remains limited. An analysis of the Swedish economy finds that only a small fraction of jobs is highly exposed to full automation: some 68 per cent of roles can be assisted by generative AI rather than replaced, while only a marginal share faces partial or complete displacement. The report concludes that although AI will significantly transform the nature of many tasks, the overall number of jobs is expected to remain broadly stable because emerging AI-augmented roles are likely to offset losses elsewhere [2]. PwC researchers reach a similar conclusion: AI is primarily reshaping tasks rather than eliminating roles, redirecting human effort toward higher-value activities rather than pushing workers out altogether. The pace of technological change, however, poses severe challenges for skill development, as traditional approaches to professional development cannot match the velocity at which new AI tools are deployed [1].
While the labour market appears more resilient than many predicted, recruitment processes are experiencing a much deeper and more immediate disruption. Here, AI is not simply altering workflows but destabilising the core signalling mechanisms that traditionally allowed employers and applicants to evaluate each other. Recent academic work makes this breakdown visible. A study by Anaïs Galdin of Dartmouth and Jesse Silbert of Princeton analysed the impact of generative AI on cover letters submitted on an online freelancing platform [4]. According to their findings, AI tools initially improved application quality by enabling candidates to produce more polished and tailored documents. Yet once these tools became widespread, the informational value of cover letters collapsed. Before LLMs, higher-quality letters reliably signalled competence and motivation. After LLM adoption, cover-letter quality ceased to correlate with candidate ability, and overall hiring rates declined. Well-written cover letters with sophisticated vocabulary and complex syntax are now widely presumed to be AI-generated. Strong candidates became 19 per cent less likely to be hired and weaker candidates became 14 per cent more likely to succeed. Application volume and length both increased substantially, leaving employers inundated with homogeneous, AI-generated text. The authors conclude that the introduction of LLMs weakened meritocracy and reduced the efficiency of the matching process [4].

A similar dynamic emerges on the employer side. A recent study highlighted by the MIT Sloan School of Management examined what happens when firms use generative AI to write job postings [5]. According to this research, AI-assisted postings were produced substantially faster and in greater volume, yet they were 15 per cent less likely to result in a hire than traditionally written postings. Although employers benefited from reduced drafting time, the ease of producing job descriptions encouraged the publication of roles for which firms had limited commitment or uncertain hiring intent, a phenomenon consistent with the broader rise of “ghost” job adverts reported in the United Kingdom. The informational signal embedded in the posting itself deteriorated: the effort invested in crafting a job ad, which previously indicated employer seriousness, no longer provided reliable guidance. As a result, candidates spent more time applying to roles that were never truly open, while employers gained little in terms of improved matches. As the Financial Times’ “AI Shift” newsletter argues, these developments illustrate the difference between making tasks more efficient and making systems more effective: AI improves the former while often degrading the latter [7].
Beyond efficiency, this signal decay carries significant financial costs. According to the U.S. Department of Labor, a bad hire, often the consequence of poor screening, can cost a company up to 30 per cent of that employee’s first-year earnings once recruitment, onboarding and lost productivity are factored in. If AI speeds up the application process but produces more mis-hires, any early efficiency gains are quickly erased by larger costs and operational disruption down the line.
Interviews with HR professionals confirm that these patterns are not confined to online platforms or experimental research settings. As part of this investigation, I interviewed Nicolas André, HR Solutions Director at VIP District, to understand how recruiters themselves perceive the ongoing transformation. According to André, the discourse surrounding AI in HR is shaped more by trends than by operational maturity: AI has become a buzzword everyone feels obliged to mention. He estimates that roughly 80 per cent of HR conferences and studies today foreground AI, yet adoption remains uneven and often superficial. Many organisations aspire to offer personalised employee experiences but rely primarily on crude segmentation tools. AI is often introduced not to raise the quality of recruitment but to relieve recruiters of repetitive administrative burdens, such as answering recurring questions through chatbots, he told the EIPS.
More troubling, André reports that some early AI screening systems encode rigid and sometimes discriminatory logic. He cites examples of filters designed to exclude candidates below certain age thresholds or to apply excessively binary classification rules. He also emphasises that the capacity to adopt AI varies widely: in Spain, for instance, more than 90 per cent of companies have fewer than ten employees, limiting their ability to implement sophisticated AI infrastructure. This disparity fuels a feedback loop in which candidates feel pressured to use AI tools simply to remain visible to automated ranking systems, while algorithms increasingly learn from the very AI-generated content they select, reinforcing a homogeneous pattern of “acceptable” applications. André argues that this trajectory risks producing a labour market in which diversity of background, experience and perspective gradually erodes.
At the same time, HR departments are discovering that effective use of AI requires new forms of expertise. André cites cases such as Accenture, which struggled during the early stages of AI integration in 2022 and subsequently hired AI-literate personnel at premium salaries. Some organisations now need dedicated AI specialists simply to operate their recruitment pipelines, a demand that did not previously exist and that further widens inequality between companies with resources and those without.
Despite these challenges, the recruitment industry is not retreating from AI. Instead, innovation is accelerating. A new generation of start-ups aims to embed AI directly into recruiters’ workflows rather than treat it as a peripheral add-on. One illustrative example is Strawberry, a company developing AI-native browsers capable of running autonomous agents [6]. Its product “Recruiter Ryan” is designed to search applicant pools, summarise profiles, map talent landscapes and identify relevant candidates with minimal human intervention. According to the company, tools of this kind can perform tasks that once required hours of manual review, illustrating both the appeal and the growing ubiquity of AI in recruitment practice. Even as AI undermines traditional signals, its integration into recruitment systems is intensifying because of the efficiency gains it promises.
Employers, for their part, are well aware that the traditional tools of recruitment no longer work as intended. As the Financial Times reports [3], many firms now describe a hiring environment in which they are “drowning in applications” while simultaneously struggling to identify capable candidates. The widespread use of generative AI has blurred the once-clear signals that CVs, application forms and even online interviews were supposed to convey. Employers complain that candidates routinely feed assessments, from technical questions to asynchronous video interviews, through ChatGPT, making it difficult to know whether they are evaluating the applicant or the model. The problem is not confined to written materials: even psychometric tests and technical tasks, long viewed as more robust indicators of ability, are increasingly vulnerable to real-time AI assistance.
What emerges from this landscape is not a story of technological progress smoothing the path to better hiring. It is a story of a system struggling to function when its core signals collapse. Once applicants can produce polished answers at no cost, and once employers can generate job adverts just as cheaply, the informational value of both sides’ inputs deteriorates. Recruitment becomes slower, less reliable and ultimately less fair. As the FT puts it, generative AI creates “a nightmare during the hiring process” precisely because it elevates weaker candidates and masks genuine talent. Productivity gains that occur after a hire is made may be real, but they are undermined if employers cannot trust the process that brings candidates in the door [3].
Edited by Maxime Pierre.
References
[1] “Rewiring the future of work”, PwC’s Global Workforce Hopes and Fears Survey 2025 (2025, November 12) https://www.pwc.com/gx/en/issues/workforce/hopes-and-fears.html
[2] “The economic opportunity of AI in Sweden” (2024, April), Implement Consulting Group https://cms.implementconsultinggroup.com/media/uploads/articles/2024/The-economic-opportunity-of-generative-AI-in-Sweden/The-economic-opportunity-of-AI-in-Sweden.pdf
[3] Sarah O’Connor, “The AI arms race in hiring is a huge mess for everyone” (2025, May 6), Financial Times https://www.ft.com/content/43cd01f9-ab95-4691-bc74-2403c87f5c17
[4] “How AI is breaking cover letters and leading to lower pay” (2025, November 13), The Economist https://www.economist.com/finance-and-economics/2025/11/13/how-ai-is-breaking-cover-letters
[5] Dylan Walsh, “What can happen when employers use AI to write job posts” (2025, August 11), MIT Sloan School of Management https://mitsloan.mit.edu/ideas-made-to-matter/what-can-happen-when-employers-use-ai-to-write-job-posts
[6] Strawberry’s official website https://strawberrybrowser.com/use-case/recruiter-ryan
[7] Sarah O’Connor and John Burn-Murdoch, “The AI Shift: is hiring becoming less meritocratic?” (2025, November 13), Financial Times https://www.ft.com/content/e5b7c3bd-925e-4907-a8fd-96b8e33353a5
[Image] Pexels. (n.d.). Gray Wooden Computer Cubicles Inside Room. Pexels.
https://www.pexels.com/photo/gray-wooden-computer-cubicles-inside-room-267507/. By Pixabay (https://www.pexels.com/@pixabay/). Licensed by pexels.com.


