How do firms safely activate generative AI? 

Many studies over the past few months have consistently shown that generative AI promises productivity gains.

The crucial question that emerges now, as we move past the discussion of its economic potential, is, "How do we safely activate it?"

Firms must make a mental shift: generative AI is no longer just another IT project, limited to and controlled by the IT department. It requires a firm-wide commitment to AI literacy, reskilling, and resource redeployment, with people-centric guard rails.

The generative AI deployment framework that we suggest is provided below.

Generative AI deployment framework

Realizing the potential   Risk mitigation   Re-imagine the work
Desk users                Data assessment   Work redesign
Citizen developers        AI literacy       Reskill
IT & labs                 Guard rails       Upskill
Enterprise environment and catalog
  1. The Potential of Generative AI

    Organizations have to grasp the distinct capabilities of large language models that set them apart from other types of AI in order to quickly tap into practical use cases. These capabilities have already been proven by many use cases in the market, from writing text, translating, and summarizing data to creating videos and images. Organizations that embark on experimentation early are more likely to discover quick wins that do not require extensive teams of data scientists or costly infrastructure. From our experience, it is evident that considerable value can be achieved even without proprietary models or custom training.

    Moreover, understanding and evaluating these use cases should be conducted through the lens of user personas. While we generally identify three core groups of personas (desk users, citizen developers, and IT teams), your organization may identify additional segments. Recognizing these diverse segments is crucial because each presents distinct risks, necessitates unique guard rails, and requires tailored upskilling programs. The desk user segment is the one most at risk of disruption, and the traditional centralized approach to IT management often falls short of meeting its needs.

    "Desk users" refers to employees who typically use computers at their desks to perform tasks. Today, numerous free or affordable AI tools are available at desk users' fingertips, enabling them to write, summarize, analyze, translate, or create images and videos by themselves. It is therefore unrealistic to make this persona wait for instructions from data scientists or IT departments. The situation resembles the transition from horse riding to trains and, eventually, to the mass production of cars. What did people need most during that shift in transportation? Driving skills and driving licenses, roads and traffic signs. Similarly, on the journey towards a generative AI-enabled workplace, the key lies in providing appropriate skills and creating robust guard rails.

  2. Mitigating the Risks

    Begin by forming an AI Risk Committee with cybersecurity, legal, compliance, and large language model experts to oversee experiments and develop an organization-wide AI Risk Framework. Given AI's rapidly evolving landscape and the ongoing maturation of associated laws and regulations, a structure that enables continuous monitoring is crucial.

    Second, start experiments with public-data use cases to practically learn new skills such as prompting and to develop a clear understanding of how to manage potential risks, including but not limited to inaccuracies, inappropriateness, copyright infringement, and bias.
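
    To make this concrete, a first public-data experiment can be as small as a single scripted prompt. The sketch below is illustrative only: it assumes the OpenAI Python SDK and a placeholder model name (gpt-4o-mini), but any hosted LLM API would serve the same purpose, and the output still needs human review against the risks listed above.

      # Minimal public-data prompting experiment (illustrative sketch).
      # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
      from openai import OpenAI

      client = OpenAI()

      # Public, non-sensitive input only: for example, a published press release.
      press_release = """ACME Corp today announced a partnership to expand
      its logistics network across three new markets..."""

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder; substitute whichever model your firm approves
          messages=[
              {"role": "system",
               "content": "Summarize the text in three bullet points. "
                          "Flag any claim you are unsure about."},
              {"role": "user", "content": press_release},
          ],
      )

      # Human review remains essential: check for inaccuracies, bias, and
      # copyright issues before the summary is reused anywhere.
      print(response.choices[0].message.content)

    Even an experiment this small turns policy into practice: prompting becomes a learned skill, and reviewing outputs for inaccuracies and bias becomes a habit rather than an abstract requirement.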

    Third, successful experiments can lead to an enterprise-wide catalogue of AI tools that enables desk users, and to the deployment of an enterprise LLM environment that empowers citizen developers. Increasingly, instead of building or customizing their own models, many companies prefer a "knowledge-based" approach built on pre-trained models, shifting tech infrastructure and skills towards prompt-based AI APIs. This necessitates an overhaul of tech stacks and strategic decisions about IT investment and data scientist redeployment. Alongside tools and infrastructure, AI literacy and guard rails need to be in place.
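
    To illustrate what an entry in such a catalogue might record, the sketch below models an approved tool as structured data: which personas may use it, the highest data classification it is cleared for, and the guard rails attached to it. The schema and the example values are hypothetical, a starting point rather than a standard.

      # Hypothetical catalogue entry for an approved generative AI tool.
      # Field names and example values are illustrative, not a standard schema.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class CatalogueEntry:
          tool_name: str                  # the tool as listed for desk users
          approved_personas: List[str]    # desk users, citizen developers, IT & labs
          max_data_classification: str    # highest data class the tool is cleared for
          guard_rails: List[str] = field(default_factory=list)
          risk_committee_review: str = "pending"  # reference to the committee's sign-off

      entry = CatalogueEntry(
          tool_name="Prompt-based LLM assistant (enterprise API)",
          approved_personas=["desk users", "citizen developers"],
          max_data_classification="public",
          guard_rails=[
              "human review of all generated content",
              "no personal or client data in prompts",
              "prompts and outputs logged for audit",
          ],
      )

      print(entry)

    Keeping the catalogue in a machine-readable form like this makes it straightforward to publish to desk users and to wire the same guard rails into the enterprise environment used by citizen developers.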

    Finally, the selection of vendors, tools, or models should be made in consultation with the AI Risk Committee. Foundation models often carry hidden risks that are not immediately apparent. For instance, many open-source models built on top of GPT or LLaMA are consequently subject to OpenAI's or Meta's terms of use. When vendors employ these open-source models, they introduce supply-chain risks that organizations, as buyers, might not fully comprehend.

  3. Building Skills

    We have seen the power of generative AI to democratize knowledge and creativity. Unlike previous waves of automation, which largely affected repetitive, rules-based work, generative AI will also affect low-volume, highly variable work. The roles and skills needed for success will change as some activities are substituted and many more are augmented. To keep up with this rapid pace of change, companies will need to make work design a core capability that ensures the optimal combination of talent and automation as AI continues to evolve. Deconstructing jobs to identify how generative AI will affect various tasks, and reconstructing work to make the most of machine and human capabilities, will be critical. This process of deconstruction and reconstruction will be pivotal to keeping the workforce relevant, because it shines a bright light on which skills are being rendered obsolete by generative AI, which skills are changing in their application, and which new skills are required. That insight enables the timely upskilling and reskilling of the workforce.

    With more news about job displacement and layoffs, AI will continue to make many people nervous. Now, more than ever, organizations need to lead with empathy and assume responsibility for global reskilling efforts. Generative AI has the potential to redefine how we work, create, and innovate. But we need to approach it not as a mere technological advancement, but as an encompassing strategy with people at its core.

    Remember: AI is not just an IT wonder; it is a human co-pilot. It is not just about reducing cost, but about reskilling to capture greater value. The future isn't just algorithmic, it's anthropocentric.

About the author
Sophia Van

Head of Global Digital Portfolio, Mercer
