Responsible use of AI in talent assessments 

Artificial Intelligence (AI) is advancing at unprecedented speed, and employees are adopting new tools just as quickly, making it hard to separate meaningful innovation from short-lived hype. At the same time, shifting regulations and evolving data privacy rules are raising the bar for HR and business leaders. In this environment, organizations need a trusted, governance-first partner to help them focus on what delivers real value and deploy AI that is compliant, fit for purpose, and aligned to long-term workforce needs. We believe AI should augment the human judgment of individuals and businesses, not replace it.

Our approach is guided by a core belief:

AI in talent assessment is best deployed when it improves fairness, improves efficiency, or materially improves the quality of decision making.

We bring together decades of deep expertise in assessment science with advanced AI capabilities to help organizations make better talent decisions while honoring the ethical, legal, and regulatory expectations around AI.

Our work is guided by enterprise-wide standards established by Marsh's AI Center of Excellence, which provides governance protocols, oversight, and risk management capabilities.

We disclose where AI is used across our solution suite. Clients and candidates know when and how AI was involved in the process, because trust is earned, not assumed.

Our AI strategy brings together advanced analytics, machine learning, and generative AI to help organizations: 

  • Identify patterns in large, complex datasets 
  • Deliver more timely, relevant insights to support decision-making
  • Automate repeatable tasks so people can focus on higher value work

Core pillars guiding the responsible use of AI

  • Strategic value

    AI enhancements deliver measurable business value, such as boosting productivity and efficiency through insights that support faster, better-informed decisions.
  • Science-backed insights

    Psychometric-based, high-quality insights supported by clean, governed datasets and scalable reporting capabilities allow organizations to make talent decisions with greater confidence.
  • Responsible execution

    Responsible AI standards grounded in strong ethical AI principles embed privacy, security, and human oversight into the process, giving our clients confidence that oversight is in place to prevent bias stemming from datasets, hallucinations, and model drift.
  • Scalable deployment 

    Solutions designed to scale across roles, regions, and populations allow for consistent, governed AI capabilities wherever organizations operate. We also hold regular discussions with clients on how AI is changing the landscape and how our tools and platforms are keeping pace.

Governance designed for high-stakes talent decisions 

We operate under a layered governance approach:
  • Regulatory compliance
    Our AI-enabled assessment solutions are designed with applicable emerging AI regulations and standards in mind and undergo rigorous conformity testing.
  • Privacy and data governance
    Marsh's Global Privacy & Data Center of Excellence sets global standards and procedures for how personal information is collected, used, transferred across borders, shared, and deleted, helping ensure alignment with relevant laws and regulations (including GDPR and CCPA). All data processed through our platform is handled with the highest standards of confidentiality and security.
  • Responsible AI standards and human oversight
    Principles of ethics, fairness, explainability, and human oversight are applied across the full lifecycle of our products. No AI system makes final talent decisions; AI-generated outputs remain reviewable, explainable, and subject to human approval.
We deploy AI only in ways that are safe, ethical, and aligned with our obligations to clients, colleagues, and regulators. 

How we use AI in assessments

Our AI capabilities are embedded into some of our assessments to enhance test security, compile summary reports, and help generate feedback and development suggestions.

What it does: AI analyzes large assessment datasets to identify patterns and trends related to performance, potential, and skills.

Guardrail: Insights are diagnostic, not prescriptive. AI surfaces patterns for human judgment; it does not make recommendations or take independent action. 

What it does: Rule-based and generative AI applications assist human reviewers with tasks such as tagging candidate responses, prepopulating report sections, and minimizing manual work.

Guardrail: All AI-assisted outputs are reviewed by a human before anything is delivered to a client. Final reports include disclosure of where AI was involved. 

What it does: AI-powered features improve the candidate and administrator experience by simplifying complex instructions into easy-to-follow language, offering relevant guidance when needed, and turning detailed information into concise, actionable takeaways. 

Guardrail: User-facing AI does not influence assessment scoring. Core scoring logic remains grounded in established assessment science and psychometric principles. 
