Well, welcome to our podcast on the New Shape of Work. I'm Kate Bravery, head of talent advisory at Mercer. And today, I'm diving deep into AI governance and ethics, specifically how we can ensure that the AI we use in our organizations is being used in an ethical and responsible manner and, to use that overused phrase, in a human-centric way, a way that doesn't just protect our businesses but also protects humanity. Some big questions for today.
And so I'm really glad that I have my dear friend, an exceptional expert in this space, Jason Widjaja, who is leading the function on ethical AI at MSD, or Merck as I think it's known in the US. It's a global health care company that really is, I think, at the forefront of research, pharmaceutical interventions, and vaccinations.
So Jason, I know we've done a few things together on stage and what have you, but I'm really glad that you can make some time today to join me on the podcast.
Well, thank you so much. Thanks for having me here today, Kate.
Yeah, I'm really looking forward to our discussion because, honestly, AI is everywhere. New laws are coming in as we speak. And I think everybody is focusing on, am I doing enough to govern the way AI is being used? Are we using it in an ethical way? Am I complying with some of the new legislation, particularly the EU laws, which I think are some of the most stringent? So they are setting that bar globally.
Absolutely, yes.
And then how do I just stay on top of the fact that even if I feel I'm where I need to be, in two weeks' time, it's going to have moved on, both the AI and the legislation? So a lot to contend with.
All right, well, Jason, maybe we can kick off today with just hearing a little bit about how you ended up being such a leading voice in this particular field, because I know you're involved in quite a few activities across the Asia region.
Yeah, thanks a lot, Kate, for having me. And I think you're absolutely right that the industry is moving very quickly. For myself, I've been involved with my firm for around nine years. And I lead a core AI value and delivery group that is focused on applied research to help the company navigate advancements in AI.
And I think having that research-focused group to help companies make good decisions is a very valuable piece of the puzzle. But I think for today, we'll focus more on the AI ethics and governance portion.
And this is also very much an emerging capability, where different organizations do it differently. I'll certainly share a point of view on how we think we should be doing it. And I will say that this is one of those areas where having this cross-functional expertise working together is very crucial, but also very difficult.
For myself, I have a bit of an analytics and data science background, as well as a computer science background. But interestingly, I also did a degree with some HR in it. And so I always appreciate speaking to HR practitioners.
That's great.
As we'll explore during this call, there are many other pieces of the puzzle that need to come together for AI governance to succeed as well.
Yeah, I like the fact that you say it's an emerging competence because I think we all don't feel fully competent in this area. And I think it's been wonderful talking to you over the last three years because it really has changed.
And I know that within your team and some of the work that you've been doing at MSD, you have pivoted and pivoted again as that environment has changed. And you mentioned there about making good decisions.
And I know that you've got a research group specifically on that. But isn't ultimately most AI about ensuring we make good decisions, whether we make them on our own or with computers? So I'm sure we will cross over to that as well.
Yeah, that's right.
Well, we started talking, I think, when ChatGPT exploded onto the scene. And it's now been a couple of years. And I think everybody was talking about, gosh, what's the art of the possible as we bring that into our businesses?
And I think because of commercial realities and just understanding about, what are the ongoing costs? What's the environmental footprint of generative AI? People have started to say, OK, what's the science of the practical? Or where can we really start seeing the commercial return?
And I hear that from some of our HR clients over the last maybe six months. And as someone who monitors this space, what do you think has been the change from that initial hype to what's getting people excited now?
Yeah, thanks for that. And I think ChatGPT is less than two years old, but it seems like a really, really long time. In two years, we have gone through probably at least three major iterations of AI products similar to ChatGPT. So yeah, the initial ChatGPT does seem quite a long time away.
I think from a workforce perspective, there has been quite a lot of publications on the initial value of ChatGPT-like solutions on the workforce. And so they come in as offering a productivity improvement, but also requiring a change in ways of working.
So without quoting the actual numbers, which I can't precisely remember off the top of my head, we basically see somewhere between a 15% and sometimes a 30% uplift within certain activities.
And that's important because some types of work are a lot more amenable to improvement by generative AI than others. But at the same time, it requires a fair bit of process and human changes.
At a very basic level, if you think about the way generative AI helps you, you would be able to generate your first draft a lot more quickly, but then you need to spend more time fact checking because of the possibility of hallucinations.
So even though the net time might be shorter, so there is some improvement, you spend more of that time checking as opposed to drafting. That's just a very basic example.
But across the footprint of Gen AI implementations, you have this same pattern where it's like it works better in some contexts than others, but it requires a change in process, and workforce training, and mindset to realize that value.
Having said that-- that was, I think, maybe the first 6 to 12 months. Right now, I think the technology has advanced, and our comfort and familiarity with it have increased.
So with the acknowledgment that everyone is on this journey and there's a whole distribution of maturity out there, I do think there's a greater appetite nowadays to go for large transformative use cases, not just saving 10 minutes on a holiday plan, but help me improve my business in a significant way using generative AI. And I think that's where many companies are at today.
The other side of it is that we also need to think about generative AI through the lens of both value and risk. And so even as we want to claim more value, we also need to mature, I guess, our generative AI ethics, compliance, and risk muscles as well so that we have the proper controls, guardrails, training, and so on in place to realize the value while not causing harm while you're doing so.
Well, we'll come to that value and risk lens because I think it's so important to balance the two, but I agree with you. And I'm always just shocked that it's just two years. You're absolutely right. At the beginning there, there was a really good article, I think, by the Harvard Business School on the jagged frontier of AI.
That's right.
They had some really good experiments about what actually delivered the value versus what you thought delivered the value. And it was preparing information for a board pitch to get investment, the typical thing many people do within the organization.
And what was fascinating, I think it was some BCG consultants doing that work, was that when you had tasks that looked very similar being done, but with small nuances between them, one actually led to a 30% uplift in quality outcomes and time savings.
And with the other one, people actually felt very confident in the output, but it was the wrong answer. But things have moved on, because I think part of that reason was the handling of numerical information.
And I think just in the last few weeks is where some of the most tremendous leaps have come into being. And we've also learnt a lot more about how domain-specific or small language models can actually play a more vital role.
So I agree with you. I think things have really, really moved on. You mentioned those productivity improvements. One of the things we see from Global Talent Trends is that our executives believe it is going to be between a 15% and 30% uplift.
And if you want to see where that uplift can come from, there's a really good website on the Oliver Wyman Forum, which actually looks at each function and industry and the predictions of those different uplifts.
But again, it depends how you're looking at that. I know that when we do work redesign, we can be unlocking up to 50%. So I think it does depend on how you're bringing it in. But I agree with you, that move from these small productivity gains, which we all get excited about, to large transformational use cases is, I think, really exciting.
And that requires a whole different discipline within the organization. So maybe we can go there. Within your organization, how are you governing which use cases get funding and which ones maybe have that commercial return? Because I can tell you from a Marsh McLennan experience, we have a lot of projects. And on the one hand, I love that. On the other hand, it's damn exhausting.
Yeah, absolutely. I think that makes a lot of sense because, I think, the difference between the generative AI that we encounter today and the AI that was a lot more prevalent, say, 5 to 10 years ago is that generative AI is very much a consumer technology.
So it's no longer the domain of maybe the 5% of the organization who are trained in data science. It's now something that the entire organization can use. But the implication of that is that ideas now come from the entire organization.
And we now have these massive funnels of use cases and opportunities to go after. And yeah, I definitely hear you that sorting through them and prioritizing them is quite exhausting.
I'm going to do something a little bit simplistic, which is to give a rule of thumb on where you should apply generative AI. I think I read a saying that you should use generative AI when you already know the answer.
Interesting.
So you would use generative AI, for instance, to draft a document in your area of expertise where you're able to judge the quality of it and fact check it quite effectively.
You would not use it for something that, for instance, is in a foreign language, which you are not qualified to check or maybe a very deep technical domain that you're not qualified to check.
So even though I work in the pharmaceutical industry, I would not be qualified to check an article on, say, human biology. So yeah, I think a nice rule of thumb to test where you should best apply it would be, is this a domain where you already know the answer before you hit send on your generative AI solution? Just something quick out there.
I love that. I think that's a really good one, and I love rules of thumb. I think that's really, really helpful. It already gets me thinking about where maybe we shouldn't be exploring it. But I think it's very relevant given, if we think about ChatGPT-4 and the ability to see, hear, and communicate at the same time,
it does bring us into some of those realms which are moving so fast, and it's exciting, but these are areas that are absolutely outside our expertise, such as the translations you mentioned. Fascinating.
One of the things I know from our prior conversations is that you've also spoken about-- and I think you actually shared this analogy with me-- sometimes you don't want the American MBA answer.
Yeah, that's right.
Sometimes you want the PhD student answer. And I think that's changed the way you've been thinking about bringing access to large language models into the business. Can you share a few words on that?
Yeah, absolutely. So I think if you were to read 100 articles on generative AI, probably 99 of them would mention ChatGPT. But ChatGPT is actually very much a general-purpose model, which performs well on a really large variety of tasks.
However, if you go to a very specific technical domain like law, or science, or some subportion of engineering, ChatGPT may actually not be the best language model in the world. And you might have a much smaller, much cheaper specialist model that outperforms it, not generally, but just in that specific domain.
To grapple with this idea, a good analogy would be that some graduates are like MBA graduates, where they are fairly proficient across a wide number of domain areas, but when pressed, they may not be a very deep expert in, say, chemistry, versus someone who has a PhD in chemistry.
And without stereotyping anyone, you know, it could be the case-- it's often the case that someone who is a deep technical expert may not be as proficient in the broad tasks, but when it comes to their area of expertise, they are top in the world.
And language models follow this pattern very much. The implication of that for companies and teams who are building capability is that you don't really want to have 10 MBAs and no PhDs. You might want to have a few strong, general-purpose large language models.
But at the same time, depending on which industry you are in, you might want to have some specialists in the domains that you're most interested in so that you can really extract the maximum value from generative AI.
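[Editor's note: as a rough illustration of the MBA-versus-PhD point Jason makes above, here is a minimal Python sketch of routing requests between one general-purpose model and a couple of domain specialists. The model names and the toy keyword routing are illustrative assumptions, not a description of any company's actual setup.]

```python
# Illustrative sketch: send each request to a domain specialist model
# where one exists, otherwise fall back to a strong general-purpose
# model. All model names here are hypothetical.

GENERALIST = "general-purpose-llm"            # the "MBA": broad coverage
SPECIALISTS = {                               # the "PhDs": narrow but deep
    "chemistry": "chemistry-specialist-llm",
    "legal": "legal-specialist-llm",
}

def route_query(query: str, domain: str | None = None) -> str:
    """Return the model to use. In a real system, the domain would
    likely come from a lightweight classifier, not keyword matching."""
    if domain is None:
        # Toy domain detection: look for a known domain keyword.
        domain = next((d for d in SPECIALISTS if d in query.lower()), None)
    return SPECIALISTS.get(domain, GENERALIST)

print(route_query("Summarize this chemistry paper"))  # chemistry-specialist-llm
print(route_query("Draft a meeting agenda"))          # general-purpose-llm
```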
I like that. It's a great analogy. You don't want a team with loads of really deep experts all together, and you don't want a team full of generalists. So I think we should be thinking about AI in the same way. Absolutely fascinating.
You mentioned earlier the productivity drive. At Mercer, we talk about three big opportunities. There's the opportunity for prediction, which we've been using a lot, particularly with scenario planning. And that's very exciting.
There's also the personalization piece, whether it's personalizing communications, or the total value proposition, or personalizing individual development plans, or even AI coaching. All of that is really exciting.
And then there's productivity, which, of course, all the executives want to speak about, and we inherit some of those expectations. But when I hear from people within the organization about what they hope AI will solve, everyone says, I think we have the answers in our business. We've got huge amounts of information about how we work, thought leadership, our clients. Help me navigate that vast amount of information.
And when I last spoke to you about this, you talked about, well, that's going to get a lot easier because we're going to have this agentic workforce, with machines working for machines to get the information for you. And that didn't quite play out.
And I wonder if you just could tell me about why that's not moving so fast and maybe how people are solving this information overload that we seem to be living under at the moment.
Yeah, so I think just to wind back to the problem you're trying to solve here. So Kate, I think you talked about, I guess, people struggling under this massive information overload. And that is definitely a problem as we try to work in this digital economy.
And we should also ask, what are the present solutions that are being found wanting and insufficient, driving people to consider generative AI?
I just want to make a distinction between the way that generative AI answers questions and the way that search engines like Google search or any search engine answers questions.
When you put a search query into a search engine, what you get is probably 10 pages and maybe a few advertisements. And it's up to you to figure out which of these pages has your answer and then navigate to the answer yourself.
Most likely, what you're looking for is in the first, or second, or third search result, somewhere on that page, maybe page 7, second paragraph. That's where your answer is. So that's the search experience today.
What generative AI could potentially do is to go from web pages and documents to just the answer and knowledge that you need. So if you think of how you may query a ChatGPT or a similar product, you're not getting 10 different pages. You're getting the exact answer, hopefully with a citation that you can track.
A nice analogy to think about this is a library. With a search engine, it's like saying, all right, I'm looking for a book on something random, maybe tuna fish, and the librarian points you to 10 books on tuna fish. Your answer is somewhere in those 10 books.
What generative AI does is not point you to 10 books. It just tells you the answer to your tuna fish question is X. So you're going from documents to just that snippet of information that you want. And that's very powerful.
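[Editor's note: a minimal sketch of the retrieval pattern Jason describes here, going from ranked documents to a single, cited answer. The toy word-overlap retriever and the model call it leads up to are hypothetical stand-ins for a real vector store and LLM API.]

```python
# Minimal retrieval-augmented sketch: documents in, one cited answer out.

def search_passages(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Toy retriever: rank passages by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(corpus.items(),
                    key=lambda item: len(words & set(item[1].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_cited_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble the prompt a real system would send to an LLM, asking
    for an answer that cites the passage it came from."""
    passages = search_passages(query, corpus)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (f"Using only the passages below, answer the question and "
            f"cite the passage id.\n{context}\nQuestion: {query}")

corpus = {
    "tuna-guide-p7": "Bluefin tuna are often cited as reaching speeds of around 70 km/h.",
    "tuna-guide-p2": "Tuna are found in warm seas and are fished commercially.",
}
print(build_cited_prompt("How fast can tuna swim?", corpus))
# A real implementation would now call the model with this prompt and
# return its answer, citation included.
```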
So I think that's the promise and potential of the future of search using generative AI. But coming to agentic workforces, for those who are not familiar with that phrase, I think it's starting to become very much a buzzword.
But aside from the technical background behind agents, the main idea is simply agency. So with a chatbot, you are talking to it and it's mostly talking back.
With an agent, the system has more agency. So it doesn't just answer back; it can actually go forth and do things for you. I think at the most extreme, imagine if someone could check your email, draft replies, and make sure that the right documents are attached to each reply.
And when you actually open the email box, imagine if all your replies are pre-written in your voice with the right attachments for you to just check and hit send. So that is the promise of agents. However--
[INAUDIBLE] agent, by the way, I think I would love to have that in the morning.
Yeah, absolutely. I think that would be really the next level of productivity. But the tricky thing is, if you think more carefully about what it takes for an agent to do that, you're giving an agent, which is basically an entity that is not you, access to your private information, access to things that speak to your style of communication, perhaps access to your passwords if you want it to make a transaction or access another system, maybe even sensitive information like bank account details, personal details, and so on.
So I think agents are currently in a space where the technology has slightly outstripped our ability to secure them. And so until those issues are solved, I think agents are still somewhat risky today.
I do think that the general trend is toward more intelligent systems that will be able to help us with broader and broader things and go further and further. But I think we do need to come up with new postures, new approaches to secure these agentic systems.
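[Editor's note: a minimal sketch of the chatbot-versus-agent distinction and the kind of guardrail Jason alludes to, where an agent can act but sensitive actions still require a human to check and hit send. The tool names and approval policy are illustrative assumptions.]

```python
# Illustrative agent loop: unlike a chatbot, the agent acts. Each action
# passes a guardrail check first; high-risk actions queue for a human.

ALLOWED_ACTIONS = {"draft_reply", "attach_document"}      # low-risk tools
REQUIRES_HUMAN_APPROVAL = {"send_email", "make_payment"}  # high-risk tools

def run_agent(plan: list[dict]) -> None:
    for step in plan:
        action = step["action"]
        if action in ALLOWED_ACTIONS:
            print(f"executing {action} with {step.get('args')}")
        elif action in REQUIRES_HUMAN_APPROVAL:
            # The agent has agency, but the human stays in the loop
            # for anything touching money, credentials, or sending.
            print(f"queued {action} for human approval")
        else:
            print(f"blocked unknown action: {action}")

run_agent([
    {"action": "draft_reply", "args": {"to": "kate@example.com"}},
    {"action": "attach_document", "args": {"file": "report.pdf"}},
    {"action": "send_email"},
])
```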
Well, I think we're going to come onto that in a moment because I think it is fascinating that so often we're waiting for the tech to catch up on the vision. And now we've got the tech, but we don't have the risk and governance mechanisms in order to make it work.
I think that does put more pressure on leaders and HR departments to really be leading the charge on some of that. And it was interesting when you were talking about, as generative AI feeds back that information, it might give you that citation to check. I love that.
But I think there are some concerns when it's just a black box and you don't know where that's coming from. That brings some other risks into effect.
And I know there is new technology that's come out that can, for knowledge management departments, also categorize information, so you can start to reprioritize and just have a little bit more control over that. And that will move us in the right direction.
But let's now move on to how do you govern some of this because I think the risks are here. And if we haven't started on this journey of building out our governance practices, we are already behind.
So maybe you can get really pragmatic with us around, how do you govern AI in practice? How do you know what use cases people are using? How do you arrange the right department and cross-functional team to monitor it in the flow of work?
Yeah, thanks for that. And so maybe let me answer this in two different ways. Firstly, I think earlier in the call I alluded to how different departments need to come together. And I wrote a small publication recently on how we think they can come together effectively. And I can share a link to that.
But we do have also an acronym. And the acronym is LACES, L-A-C-E-S. And I think the idea of LACES is that we're trying to hold all these different disparate groups together, hence the acronym LACES. And also, even though LACES has five letters, there's actually more than one team in each letter.
I will just list them out. So the first L stands for legal. As you know, we now have upcoming legislation regarding AI. So legal would interpret that. The second L is L&D. And this is very much in the HR space.
Because it is a consumer technology, it's not something for just a small group of technical experts to wrestle with. It's something that we need a broad based education. And we can talk about this a bit more.
Coming to A, the first A is AI. So we do need, ideally, internal teams with the expertise to help guide and provide technical input to the governance groups. The second A is audit. So we also want to tap into the audit and assurance capabilities to better test our internal controls.
Coming to C, the first C is cybersecurity, because with AI also comes a whole bunch of new gen AI-specific cybersecurity risks. And there's a lot being written in this space, like new technology risks and security requirements for things like prompt injection and so on.
The next C is compliance. So this is really the internal interpretation of external legislation and how that would play out within companies. We also have a third C, which is comms and policy, because we cannot expect our organizations to directly read these 400-page legal documents. And so we do need to synthesize them into policy and communicate them out.
The first E stands for ethics. And the second E stands for ESG. Even in the absence of strong company positions, we do want to have avenues to do the right thing when we find AI going wrong.
And finally, S is strategy, because we do need to tie all this into the overarching company strategy. So it is quite a mouthful. And 10 teams is a lot.
I'm just rapidly writing it all down here. So I love the idea of LACES because it ties it all together. But am I right in thinking that this is basically touching on about 10 functions? So there's legal, L&D, cybersecurity, compliance, comms and policy, strategy, ethics, and ESG, audit. And I think I missed the other A.
AI.
AI, OK. Wow, that is a lot of teams involved. So how is that working in practice? How often are they coming together? And then how are they getting the right information to govern?
Yeah, so I think-- although we didn't set out by saying, all right, we need 10 teams, let's assemble the Avengers kind of thing. It's more the case that, as we started to grapple with the implications of the technology, we discovered that we needed various functions to play their part so that the whole can effectively govern emerging risks.
So they actually are spun into various work streams. So I think one common work stream, which is no secret, is that all companies have just under three years as of now to comply with the EU AI Act.
And that would require coordination across multiple teams to update controls, update policies, and so on. So that's an example of a work stream that these 10 groups will work on.
Other possible work streams might be coming to positions on disclosure: when you are interacting with a gen AI system or using generative AI content, you might want to label it in some way that's consistent across your company.
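[Editor's note: as one concrete way the disclosure work stream might play out, here is a small sketch of labeling generated content with consistent provenance metadata. The field names are illustrative assumptions, not an established standard or MSD's approach.]

```python
# Hypothetical sketch: wrap AI-generated text in consistent disclosure
# metadata so downstream readers know it is AI-generated and whether a
# human has reviewed it. Field names are illustrative only.
from datetime import datetime, timezone

def label_generated_content(text: str, model: str, reviewer: str | None = None) -> dict:
    return {
        "content": text,
        "ai_generated": True,                            # disclosure flag
        "model": model,                                  # which model produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "human_reviewed_by": reviewer,                   # None until fact-checked
    }

record = label_generated_content("Draft policy summary...", "general-purpose-llm")
print(record["ai_generated"], record["model"])
```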
Another thing is just the broad base of education and training that's needed across the workforce. And that's why we need comms, policy, and L&D. It's not about technical expertise in AI, nor is it solely about legal and compliance. It's really a workforce capability and education uplift that we are looking for.
And this is not just a nice-to-have, because if you think about the ChatGPTs of the world, it's really not the system that is the risk, it's the usage of the system that is the risk. ChatGPT may encrypt its data. ChatGPT may be cybersecure. But it's the billions of prompts and tokens coming out of ChatGPT that are the risk.
And because of that, we have shifted our risk posture from the risk of systems to the risk of usage of the systems to generate content. So I think this is really a paradigm shift that I don't want to minimize. And I think the implication of that is we do need to start working at the level of users and usage, as opposed to just here is an IT control to lock down my system.
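[Editor's note: a minimal sketch of what a usage-level control, as opposed to a system-level lockdown, might look like: screening each outgoing prompt for sensitive patterns before it reaches an external model. The patterns and policy are illustrative assumptions, not a real data loss prevention product.]

```python
# Hypothetical usage-level guardrail: check prompts, not just systems.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this complaint from jane.doe@example.com"
findings = screen_prompt(prompt)
if findings:
    print(f"blocked before sending: prompt contains {findings}")
else:
    print("prompt forwarded to the model")
```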
Yeah, absolutely. And obviously, governing use cases is very much aligned with the EU laws with regard to the different riskiness of those use cases, which carries different levels of reporting and submission that you need to make. It is an evolving landscape.
And how are you staying on top of these different pieces of legislation? Because I think that's going to be the first of a number of different updates to legislation happening around the world. And quite frankly, it feels exhausting. Do you have a way in which you can stay on top of it, or any advice for those of us who are listening in today?
Yeah, that's a tricky one. I think staying up to date is something that everyone struggles with. I will say that we have a few different radars scanning the environment.
So at one level, you want to understand, I guess from a policy compliance perspective, what are the regulatory and legislative advances and events in the space. So we do have a policy and legislation scanning capability.
But I think equally important is the technology scanning capability. It's not just like, here's a new law, but also, like, did you know that now we have a model that can reason much better than the previous model? And what does that mean?
Did you know that deepfakes are now almost impossible to differentiate from a real image? So I think we would like to have both the policy and legislative sensing, as well as the technology sensing, working hand in hand.
I love that idea of policy sensing and tech sensing. The policy sensing, though, feels exhausting because there seem to be ever-increasing numbers. Is there a shortcut for how we can stay on top of that, or are there two or three key places that you tend to go to make sure that you're compliant?
Yeah, absolutely. The first thing I'll say is that this is a major problem because, at last count, I believe there are over 1,800 distinct policy pieces around the world. And this is from the OECD AI Policy Observatory. And I think we can't expect anyone to go and read 1,800 of these fascinating documents.
There are maybe two places that I go to. The first one is the concept of crosswalks. Crosswalks are actually documents that map major regulatory documents to one another, such that they are more interoperable.
So I think in the spirit of having more standardization and cohesion globally, crosswalks are a great place to start. I'm currently based in Singapore. And I think one fantastic crosswalk is between the OECD, NIST, and I believe Singapore and Japan.
The second one, I think, is somewhat related, which is when I think about the frontier of thinking on AI safety. In the past 12 months, there were actually six AI safety institutes set up globally. And to me, reviewing their documents, they represent frontier advancements in AI safety thinking.
I believe they are the US, the UK, the EU, Canada, Singapore, and Japan as well. There's quite an overlap between the crosswalks and the AI safety institutes. And I think that is by design, because they are really leading the various global communities in trying to work towards, I think, better global interoperability and maturity in AI governance.
Great, wonderful. Well, that's really great advice. And before I let you go today, we've got a lot of people who sit in HR on the line. And that was really great to hear just the focus on the policy and the L&D.
I think some of our concerns-- and Jason Averbook, my colleague, flags this all the time-- even if you've said to people, yeah, don't put sensitive data in ChatGPT, if they've got a smartphone in their pocket, they can just take a picture of it and feed it in.
And we've also seen, I think, from recent research from Microsoft that many people don't want to admit that they're using some of these large language models that sit outside the business for fear that that might mean they'll lose their job or some other negative impacts.
So it really is new, uncharted territory. And I think that focus on training and ensuring people understand some of the values we want to uphold with ethical AI practice is just going to grow in importance.
But we've been using AI a lot in HR for years, obviously, whether it's machine learning applications and process optimization for recruitment or employee sentiment analysis. What do you think are the top use cases for HR moving forward?
Yeah, that's a really fascinating one. And I think I'm torn between answering that on a tactical use case basis versus at a strategic level. So I'll just attempt to do one or two of each.
I think tactically, we have been struggling with diversity and inclusion for a while. And that's where the idea of bias creeps in as well. I will say that generative AI is a powerful tool to rewrite things like job descriptions, external communications, and so on in a way that encapsulates your company's values.
I also think that from an accessibility perspective, it's no secret that the majority of the world's digital content is in English, but many people don't have English as their first language. And I think having these generative solutions to help translate things really is a boon to accessibility. So that's on the use case level.
The other thing is, I find your comment on companies clamping down on the use of generative tools very, very real. But it's also one of those areas where, if you try to police it, you're just driving the problem underground. So I do think that companies need a generative AI strategy where you're able to safely enable these kinds of technologies.
I think one last element is that, as you know, I've run an AI governance team for, I think, coming to five years. And so to me, there's so much work in this space that it's very hard for me to imagine being able to execute all these things at the organizational and enterprise level without having AI governance as a function.
So my point of view is that AI governance really represents one of these industry 4.0 new skill sets, new departments, new functions. Whether or not you choose to encapsulate it in a central department, it is a muscle that I think organizations need to build.
And the need for this muscle will grow in the days to come. So I think I just encourage everyone to consider building these muscles in your companies.
Yeah, great, great advice there. I love your comments there on DE&I. I think on the one hand, there's been so much worry about, given we've got biased data, are we perpetuating that in the system, through to, actually, it's going to change the game.
And we've got AI apps that will nudge people if off-cycle pay decisions will actually expand equity gaps. I think that's really positive. I was talking to a client yesterday, and they were talking about how they're using an AI facilitator.
And that's doing two things for them. One, it's making sure they stay on task with the meeting, because the facilitator is saying, this is the agenda, and we want to have more efficient meetings, so we're going to do it in 45 minutes, not an hour. I love that.
But it is also saying, none of the women have spoken up as yet, or, this is the budgeting call and finance hasn't weighed in. And so I thought, oh, gosh.
That's going to be very cool, because you've got someone who doesn't have an allegiance to being male, or female, or other, weighing in and reflecting back if a woman's been spoken over or if her ideas haven't been taken forward. And I think it's going to be hugely powerful in organizations.
And then the other one I've been hearing a lot about, I think, is the AI coaches that are helping individuals. And certainly in Asia, there is an increasing number of companies that have said people are loving the fact that they can talk about preparing for a difficult conversation in a non-judgmental way. And I think that's going to be fascinating.
But it does bring back some of the topics that you and I chatted about over the years about, what's the value set that sits underneath the tools that our children and our employees are speaking with? But maybe we save that one.
That sounds like the topic for a whole-- either your next book or a whole different [INAUDIBLE], and it'll be something like--
Or yours, Jason. I think we're on the same line with that. We are absolutely at time. Jason, thank you so much for joining. It's wonderful to share the conversations we have with a wider audience.
I love the work that you are doing, always staying at the forefront of that. I know you've also been contributing to a body of knowledge on this to help other people that are in this space and also publishing yourself. We will make sure that we have those links on our website so that people can connect to it.
Listeners, thank you for tuning in today. I'm sure you're as passionate as we are about the topics. And my big takeaways today-- we've got to evaluate value and risk together.
We've got to think about the cost of using these tools and also the suitability for different use cases. And we've got to govern on those use cases, but also build your own Avengers team. I love that.
If you want more, please go to the mercer.com page. We've got a whole lot of research on AI, and some of the new AI applications that are coming up, as well as further communications on governance and ethics.
And also, if you want to hear other thought leaders or HR experts talking about some of these topics, do check out some of the other editions that we have on there. Thank you, everybody. Thank you again, Jason. And in the meantime, wishing everyone an energizing rest of your day.
Thank you so much. Have a great day.
Thanks, Jason.
[MUSIC PLAYING]