Image: A mature CEO and her team of professionals in a briefing before work.

Artificial intelligence has the potential to infuse diversity, equity and inclusion (DEI) into organizations at a systemic level — if it’s used wisely.

When paired with a keen DEI lens, AI can ensure that foundational journeys and processes support the organization’s DEI strategy — and even dismantle historical bias that may have gone unseen. However, most US companies feel they are not prepared to thrive in the era of human–machine teaming. And as Mercer’s Global Talent Trends 2024 report reveals, only 16% of employees globally believe their organizations’ leaders are embracing AI and/or automation.

Those who don’t use or understand AI can’t reap the rewards, nor can they manage the impact of AI on DEI. Leading companies are keeping several risks and opportunities top of mind in their digital transformations.

The risks AI poses to DEI

AI could stall or reverse corporate DEI progress and even exacerbate the problems this work aims to solve. Only 33% of employees expect (or have already seen) positive changes in their workload due to AI and/or automation. This is partly because many firms carry their old processes into new technologies: paper forms become digital files that are easy to create, copy and move, so the noise and clutter they fuel often get worse.

From a DEI perspective, this trend toward business as usual can amplify the marginalization already present in the organization. Here’s what to watch out for:

  • Bias

    Generative AI can perpetuate the biases found in the training data, algorithms and other inputs that shape its outputs. Tools like Midjourney and ChatGPT were trained on a wealth of online data, including harmful and biased content that can resurface in what they generate.

    Company data can also capture historically biased decisions. People from certain groups may have been hired at lower rates or received less positive feedback than others. AI models that aren’t trained on diverse, high-quality data and designed to flag concerns could simply repeat those patterns of bias from company records.

    Bias is also present in other AI applications. Facial recognition and analysis tools that are trained mostly on images of white men, for instance, can misread photos of women and people of color. And flaws in development can drive a host of predictive algorithms to make unfair, adverse decisions for marginalized groups on everything from loan applications to criminal sentencing.

  • Data privacy and security
    Without the right controls in place, connecting AI to company systems can compromise employee data privacy and security. Colleagues could gain access to each other’s private data. AI vendors might collect, sell or mishandle personal information from customers and their employees — or fail to secure that information against cyberattacks and identity theft by third parties.
  • The digital divide

    Perhaps the greatest threat from AI is the digital divide: an opportunity gap between those who can access new technologies and those who cannot. Much like computers, the internet and mobile devices, AI unlocks new resources and capabilities — and those who can’t use it won’t benefit.

    Part of the digital divide is material. At a minimum, using AI requires a computer or mobile device, which many people either can’t afford or don’t receive from their companies unless it’s essential to their jobs. Web-based AI tools call for internet access, but that isn’t cheap either — and it’s unavailable in some rural areas.

    Further, generative AI takes far more computing power than what most people can access for free. Some vendors do provide streamlined “free” versions of generative AI tools, but they often collect users’ personal information to subsidize the costs. The more advanced tools typically require that people or their employers pay a subscription fee — and even at large companies, it’s a significant cost that often limits the speed and availability of these tools.

    The digital divide is also about exposure. It takes hands-on learning to effectively use, understand and benefit from AI. People who don’t get to use it at home, in school or on the job won’t enjoy the same professional growth as those who do.

The opportunities AI brings to DEI

Despite the risks, AI’s potential to boost DEI is too great to shrug off; it can synthesize and enhance the knowledge and operations that are essential for building diversity, equity and inclusion — at work and beyond. Here’s how:

Amid fading budgets and buy-in, it’s no secret that DEI programs are folding, and DEI leaders are feeling burned out. Some Chief Diversity Officers (CDOs) are stepping down altogether, exhausted by doing it all themselves.

Enough is enough.

AI can help reduce the burden on DEI leaders and optimize costs. From drafting content to reporting on DEI metrics, AI’s ability to streamline and automate will finally free up time for CDOs and their teams to do more with existing resources.

Those in the DEI function aren’t the only ones who stand to benefit. According to Mercer’s 2023–2024 US Inside Employees’ Minds Survey, 51% of employees believe technology will help them be more efficient and effective at work. What’s more, they’re expecting generative AI to boost the efficiency of their companies’ operations.

Does analyzing company data for bias and inequity — across every employee and every dimension of diversity — sound like a massive undertaking? It was — until AI came along. AI can process huge datasets in a fraction of the time it would take most people on their own.
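At its simplest, this kind of audit can start with a few lines of code. The sketch below is illustrative only — the records and group labels are hypothetical placeholders, not real company data — and it applies the well-known “four-fifths rule” heuristic: flag any group whose selection rate falls below 80% of the highest group’s rate.

```python
# Illustrative sketch: screening hiring records for selection-rate disparities
# using the "four-fifths rule" heuristic. The records and group labels are
# hypothetical placeholders, not real company data.

from collections import defaultdict

def selection_rates(records):
    """Compute the hire rate per demographic group from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def flag_disparities(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` (80%)
    of the highest group's rate -- a common first-pass fairness check."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * top}

records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(flag_disparities(records))  # group_b's 25% rate trails group_a's 75%
```

A real audit would, of course, cover far more dimensions, control for confounders and feed into human review rather than automated decisions — but the core pattern is the same: compute outcomes by group, compare them and surface the gaps.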

Likewise, auditing communications through a DEI lens is a vital task with little room for error. Corporate communications are often planned weeks or months in advance, but inclusive language and cultural contexts evolve in real time. AI can help organizations keep pace with change, check for concerns in their messaging and fine-tune communications for maximum impact.
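As a minimal sketch of what such an audit looks like under the hood, the snippet below scans draft copy against a small, hypothetical term list with suggested alternatives. A production tool would rely on curated style guidance and context-aware language models rather than a static wordlist.

```python
# Minimal sketch of a communications audit: scan draft copy against a small,
# hypothetical list of flagged terms with suggested inclusive alternatives.

SUGGESTIONS = {
    "manpower": "workforce",
    "chairman": "chairperson",
    "grandfathered": "legacy-approved",
}

def audit_text(text):
    """Return (flagged term, suggested alternative) pairs found in `text`."""
    lowered = text.lower()
    return [(term, alt) for term, alt in SUGGESTIONS.items() if term in lowered]

draft = "We need more manpower before the chairman's review."
print(audit_text(draft))
```

The design point is the feedback loop, not the wordlist: because language norms shift, the guidance behind a tool like this needs regular human curation — which is exactly where a DEI team’s expertise stays in the loop.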

AI can also foster DEI through better engagement. For instance, generative AI can help recruiters attract and engage diverse talent through more inclusive strategies, outreach, job descriptions and interviews.

Generative AI can also monitor employee sentiment and performance — and even guide management to better serve and retain talent from underrepresented groups. This coaching could include training simulations for interacting with different workforce segments.

Segmenting the workforce by common traits can be helpful — to a point. Focusing too heavily on segments, however, can obscure the full scope of diversity across the workforce. AI can go further, improving the employee experience through highly personalized engagement that treats each employee as an individual.

Generative AI, in particular, has the power to reduce barriers. It can synthesize, analyze and create information on users’ behalf to help bridge knowledge and skills gaps. This ability levels the playing field and opens new career opportunities to a broader swath of people from underserved populations.

Generative AI can also break through language barriers. It translates text and speech from one language to another with remarkable speed and accuracy, and it can even manipulate live video so that speakers appear to deliver their remarks in another language. These capabilities will improve communication among multinational colleagues and organizations and empower people from low-income areas to work remotely for better wages.

What does all this mean for accessibility — the opportunity for people, with and without disabilities, to access resources with similar ease and effectiveness? AI supports text-to-speech and speech-to-text functions and can even generate audio descriptions of visual content for people with disabilities. These assistive technologies create more opportunities for more people.

The way forward

Through careful planning and execution, companies and their people can channel the power of AI to reach new heights — without losing control. Mercer sees leading organizations embracing a few fundamentals to strike the right balance.

Establish AI governance

AI governance is a system of rules and oversight to ensure AI is developed and used responsibly. This system might include an AI risk committee (with diverse representation) and routine algorithm audits to root out any concerns. These safeguards can help companies and other institutions manage AI’s impact on people, the climate and the economy.

Firms are exploring AI governance for several reasons, including minimizing security threats and increasing opportunities for DEI. Companies can build ethical AI processes and policies into their digital transformations to mitigate the risks around bias, employee data and the digital divide. It’s about moving from problems to solutions: worrying less about designing the right thing and more about designing the thing right.

Image: Incorporating AI governance into digital transformation activities to create DEI opportunities — moving from finding the problem to creating a solution.
Governance also allows AI to improve DEI. It takes checks and balances, with a human eye toward nuance and empathy, to use AI for better engagement, fewer barriers and a sharper DEI lens. AI governance can also ensure that the cost savings and productivity gains from AI are distributed equitably across the entire workforce, not just reallocated to more work.

Ensure equitable access

AI is poised to transform how we work, learn and connect. Given its potential, early adopters will have a clear advantage over the fence-sitters. But exposure to AI could vary by demographics, socioeconomic status and other dimensions. Expanding access to AI will help spread the gains and narrow the digital divide. Here are a few ways to accomplish this:

  • Education and public awareness campaigns to promote AI literacy
  • Investments in AI infrastructure (devices, internet hardware, cloud servers)
  • Funding assistance for AI-powered tools and related technologies
  • Accessible design in AI applications for people with disabilities
  • Institutional partnerships to synergize AI training, resources and opportunities

Foster career growth

Although expanding access to AI can spark career growth, generative AI tools often cater solely to digital workers and corporate professionals. Not all industries are preparing employees to thrive with AI: it could soon automate a range of administrative tasks, yet it isn’t needed for certain blue-collar jobs, such as frontline manufacturing.

These workers might leverage AI in the future to bridge gaps in their knowledge and skills. But they first need to know how to use it — and according to Global Talent Trends 2024, less than half of employees (45%) trust that their organizations will teach them the skills they need if their jobs change due to AI or automation. Organizations, therefore, have a part to play if we truly want to reap the benefits of democratized knowledge from generative AI.

The lack of AI expertise is also fueling a “techsistential” crisis in the global workforce: the current supply of AI skills among workers won’t satisfy near-term demand. So how might employers close that skills gap?

Reskilling and upskilling can solve the crisis and drive equity — both in career opportunities and in how the gains from AI are distributed. A portion of the time and money AI saves can be reallocated to AI training that benefits the whole workforce. Such efforts deliver transferable skills that support better career pathing and employability, especially for people from underrepresented groups.

The future of humans and machines

The best practices discussed above aim to strike the right balance of human–machine integration, leveraging the promises of AI to transform DEI. Working with AI also pushes us to reconsider our individual human capabilities, thus bringing us even closer together and driving inclusion. Mercer provides a wealth of resources that turn the now of work into the wow of work. Join Mercer’s Generative AI Forum, or contact us for relevant DEI support to get started.
About the author(s)
Kai Anderson

Partner Executive Advisory

Ishita Sengupta

Senior Principal, Workforce Strategy and Analytics

Ryan Malkes

Principal, Head of Generative AI Advisory Services, Mercer

Gina Fassino

Associate, Transformation Consultant

Basia Matysiak

Diversity, Equity and Inclusion Consultant

Julia Gruenewald

Diversity, Equity and Inclusion Consultant
