
An AI-driven future for asset management requires a different kind of trust 

Artificial intelligence (AI) is being actively thrust into the financial services value chain, from research and portfolio construction to risk management and client reporting. Adoption may lag the pace of change, but, as with any period of disruption, one constant remains: trust.

Asset management was arguably built on several core components, and one of them was relationships and trust. Investors place confidence in institutions, in long-standing advisory relationships, and in the judgment of experienced professionals when making critical decisions. That trust originated in human-to-human interaction and evolved to include corporate-to-corporate relationships (aided by regulation), but the currency of trust is evolving again.

As AI becomes embedded in decision-making processes, trust is no longer just about people or firms; it extends to the systems and machines involved, and above all to the data that feeds them and is increasingly harnessed to make key decisions. For asset owners and asset managers alike, the question is not whether AI will play a role, but how trust can be maintained as that role grows.

The Foundation of Trust Is Data

Trust in AI is often framed as a question of technology. In reality, it is a question of data.

AI systems are only as reliable as the information they are trained on. Quality, relevance, and breadth of data directly determine the usefulness of AI-generated insights. A simple analogy applies: would you trust a recommendation based on four data points, or forty?

In our view, data depth and continuity matter. They allow AI to identify patterns, extract signals, and surface insights from expansive datasets in ways that would not have been economical, or even possible, through manual analysis alone.

Equally important is maintaining a human in the loop to help ensure an accurate starting point for data and models, and to prevent “AI drift,” where systems are trained on outputs generated by other AI tools and gradually compound errors. We believe human expertise remains critical to validating inputs, interpreting results, and ensuring models stay grounded in reality.

From the Human in the Loop to Machine-Centered Trust

Today’s operating model is unambiguous: humans must remain in the loop. AI tools support analysis and insight generation, but oversight is essential. Known limitations, particularly hallucinations and opaque reasoning, make human judgment an essential safeguard. Regulatory frameworks reinforce this approach, reflecting a shared caution that accountability must remain clear.

Yet this model faces challenges. Productivity “co-pilots,” automated monitoring tools, and advanced decision-support systems are beginning to supervise workflows that humans once managed directly. This raises a provocative but increasingly relevant question: what happens as AI systems begin to validate, challenge, or even override human decisions in defined contexts? If machines become part of the control environment, how do we establish trust when the validator itself is automated?

For many asset managers, the expectation is not full automation, but meaningful augmentation. The focus is on using AI to process more information, more consistently, and at greater scale, while preserving human accountability.

Principles for Trustworthy AI in Financial Services

For asset managers deploying AI, trust is not provided by a single control; it needs a framework. In our view, several principles are essential:

  • Data integrity: Inputs must be high quality, relevant, and verified.
  • Transparency: Users should understand the intent and logic behind AI-driven recommendations.
  • Governance: Clear accountability, escalation paths, and oversight mechanisms are non-negotiable.
  • Experience: Domain expertise and historical context must be embedded in models, not bolted on later.
  • Client-centricity: AI should not adversely affect client outcomes in the pursuit of internal efficiency gains.

When these principles are applied together, AI could become a tool that aids judgment rather than replaces it.

Democratizing High-Quality Advice

One of AI’s most promising implications is access. By automating labor-intensive analysis and synthesis, AI could allow high-quality insights to be delivered at scale. This could enable advisory models powered by an AI co-pilot to reach a wider universe of asset owners and more diverse client segments without diluting quality.

Rather than driving decisions solely through cost reduction, AI can potentially expand opportunity sets, improve coverage, and support better investment outcomes. The goal is not to make the same number of decisions faster, but to make more decisions, consistently and at speed, freeing up time for the value-adding elements of the process.

Client Perception and Responsibility

Despite progress, skepticism remains. Many clients still question AI’s transparency, intent, and ethical use, particularly when decisions affect capital allocation and long-term outcomes.

That caution is healthy. Trust is not created by adopting technology quickly, but by implementing it responsibly. For us, that means being clear about how AI is used, what it can and cannot do, and how human expertise remains central.

As AI becomes a generational technology for financial services, there is little time to “earn trust later.” Trust must be designed into systems from the start – anchored in data, governance, and experience. We believe the future of asset management is likely to be shaped by AI. Whether it is trusted will depend on the choices made today.

About the author
Rob Ansari

Global Head of Analytics and Portfolio Solutions
