McKinsey Quarterly

Don’t delegate the AI revolution: A conversation with Sanofi CEO Paul Hudson

Paul Hudson, CEO of Sanofi, has been driving an all-encompassing effort to use AI to reshape the way the company does business. He’s a vocal advocate for the technology and its great potential. Working with external AI partners, he’s pushing Sanofi to become the AI leader in its industry. What follows is an edited version of his conversation with McKinsey Senior Partners Hemant Ahlawat and Lareina Yee.

Lareina Yee: Paul, there has been a lot of change in the world since you became CEO of Sanofi in 2019. In technology, we have seen remarkable shifts, most recently with AI and agentic AI. How has this reshaped your vision of the future at Sanofi?

Paul Hudson: My first role at Sanofi was back in 1990, when the mobile phone was a breakthrough. I was here when the internet took off: In the slide decks we’d make, the last slide would always be, “We are considering having a page on the World Wide Web.” So, when AI started moving and we recognized it was real, we were determined the company would not be a laggard. We would try a lot of things. We would make mistakes, and that was all acceptable as long as we were learning.

We decided we’d go end to end on the company’s value chain. I think our competitive advantage will be that we go all in, end to end, while others are dabbling, doing proofs of concept, handing the project off to CDOs [chief data officers] rather than understanding its relevance.

Earlier this year, I set a challenge for my leaders: Disrupt your function with agentic AI by the end of the year. AI is the one thing that drives disruption.

Lareina Yee: As CEO, you are all in on the details of this transformation. You have told me that you won’t delegate this to the technologists.

Paul Hudson: CEOs in my generation delegate. We’re masters at it, and we build teams we can delegate to. But with the new breakthrough technologies, you’re instantly obsolete—even if you have an amazing CDO, as I do.

If you delegate, here’s how it goes: You’ll be in a meeting and hear about some AI breakthrough the company could use. At the next meeting, you ask, “What happened?” They say, “We delegated it to our CDO. We can build it ourselves. We’d rather keep it in our protected environment.” And then they hope that you will forget you ever asked. It’s corporate lingo, shorthand for “We have no idea. We’re a little embarrassed that we have nothing to show you, boss.”

The way we do it at Sanofi is a competitive advantage. Last year, I redeployed close to a billion euros in real time. We didn’t wait for a budget cycle; we had real-time business intelligence. Using AI, we reduced out-of-stocks by 80 percent, which is close to a billion euros, and we improved asset utilization by more than ten percentage points.

We look at everything end to end. Can we go cheaper? Can we go more robust in certain areas? Can we introduce a shop floor agent to improve asset utilization? Can an agent help with drug design and discovery? We’ve made some good decisions along the way. For instance, most companies have built AI in silos or verticals. Our leadership team decided four years ago to build it transversally so that we would be one of the few companies with a transversal data set that allowed us to go entirely along the value chain. We thought we might be crazy at the time, but it was a good call.

All of this is fabulous. It creates opportunities to simplify so much, to remove bureaucracy, to make workflows more efficient. It’s great for me because I get to redeploy all these resources elsewhere in the business, principally to chase miracles for patients who need help.

Hemant Ahlawat: I remember you saying that you want Sanofi to be the first biopharma company powered by AI at scale. If you look out five years, what would this look like for patients, doctors, and caregivers?

Paul Hudson: We know it will be a while before AI designs, develops, and delivers a medicine on its own.

When we are in a Phase I clinical trial, we’re eight to ten years from launch, and we have a 90 percent probability of failure. In the medium term, we think that we might be able to use AI and generative AI to reduce the chance of failure to 70 percent. You’re still bringing forward a medicine that may not make it, but you’re much more efficient. You could, of course, drop those efficiency gains to the bottom line. But we’re an R&D-led business. Using AI in early discovery and development stacks the cards in your favor. So you can now redeploy the capital that is no longer at risk into more programs that reach for the miracles.

The same is true in Phase III trials, where you might spend $500 million even with a chance of failure of something like 30 to 35 percent. The cost of a miss is massive. People don’t understand how much pharma companies spend every year in pursuit of breakthroughs. Raising our chance of success from 70 percent to 80 percent would be incredible.

There’s so much opportunity to support patients on their journey. The holy grail, of course, is developing breakthrough medicines. But we can help reduce friction during their journey. We can help make sure that they have the right insurance coverage. We can improve our support of people in patient access programs. There are strict rules on how we communicate to consumers, but we can improve that. For instance, Hemant, you might read that package insert with tiny print in every box of medicine, but you’d be the exception. We’d rather use a QR code to direct people to a YouTube video, or to TikTok and Instagram. We’re trying to meet them where they are.

Lareina Yee: Improving the chances of drug discovery, of solving things that matter so much to humanity, seems like the holy grail. As you pursue that, you are working with ecosystem partners like OpenAI and Formation Bio. Tell us why this helps.

Paul Hudson: Big corporations can’t rely on their internal speed to match the transformation that is happening in the world. As soon as I know a competitor has decided to build something itself, I know it has lost. Big companies can’t attract all the right engineers, the right prompt writers, and so on. So many of those people want to be around other people like them in start-ups, accelerators, and scale-ups. They want to innovate, and many big corporate machines don’t want them to.

We do have a lot of those great people at Sanofi. But we work a lot with external partners and experts, people who might be better than us, who move faster than us. And then we try to create an environment where that energy can really take hold, where you can take risks, play with data, do things that nobody thought were possible.

Hemant Ahlawat: You haven’t kept your AI development in a walled garden. You once told me, “The more people do this, the more my algorithms improve, and the faster we can move and get better.” Why not be more proprietary?

Paul Hudson: We decided early on that whenever we partnered on AI, we would not ask for exclusivity. Anything we pioneered or innovated should be available to everybody. I went into healthcare because I wanted to do some good; as they say, a rising tide lifts all boats. If we work with outside partners to develop an algorithm or an agent that improves forecasting, we want all the players to be able to use the same algorithm.

Winning the AI revolution is not going to depend on who has an algorithm or an app. It’s going to be about who executes better than anyone else.

Lareina Yee: Have you encountered internal resistance to using AI agents?

Paul Hudson: We did at the beginning. Most people fear this revolution because they worry about their jobs. But the only job that gets replaced is that of the human who refuses to use AI, because human plus AI beats AI alone every time.

An epiphany for me was taking to heart the fact that the algorithm or the agent doesn’t have a career at stake. So, when we are doing a Phase III trial, with hundreds of millions of dollars at stake, we can ask the agent whether we should go forward or not, and the agent gives us a pure yes or no. It can be truly sobering to have an agent tell you yes or no. It has not been wedded to the project for 11 years, so its candor is exponentially different.

But our people rise to that. Whether you follow the agent’s recommendation is up to you. You need to make a conscious decision to reflect on its insight, but the decision to follow it or not is entirely yours. We’re never going to tell people to execute everything the agent says.

It’s kind of like Waze telling you the best route to somewhere you want to go. You decide whether to go that way or not.

Hemant Ahlawat: How has AI improved your life as a manager?

Paul Hudson: I’ll give you an example.

Like many CEOs, I’ve been in budget meetings where people say, “You know, the end of the year could be quite tough. So, I’m not sure that this growth we’re seeing now can be maintained.” They want to get to the end of the year with a budget that’s reasonable so they can accelerate the following year. Everybody’s hedging a little here, there, and everywhere. So, you may not deploy $500 million, $600 million, $800 million because it’s hedged. That’s a waste, because that money could be working for you.

I was with our brilliant team in Germany recently, and they wanted to show me their resource deployment agent. They had asked the agent to tell them where they were not optimally deployed. And the agent came through. It looked at everything and quickly came back, saying, “You’re a few hundred grand over here, a few hundred grand under here.” And it made a recommendation that the team already had plans to act on.

The agent is the great leveler. We don’t get the biases in resource deployment that we used to get. For me, it’s incredible. I now have the privilege of being able to look across all our major markets and consider whether the agent confirms that we have deployed as optimally as possible. That is a huge weight off my mind.

Hemant Ahlawat: As you reflect on Sanofi’s AI journey, is there a challenge that has been particularly hard to overcome, where you feel that things should have gone differently?

Paul Hudson: We were not bold enough at the beginning. I think our early projects simply swapped out existing systems for AI. We were a little too conservative.

The big one is that I underestimated the resistance movement. Back in 2021, when I was still fairly new, I’d take a few minutes to walk around and sometimes walk into someone’s meeting. I’d go in and say, “Hey, how can I help? What are you doing? What are you learning?”

One day, I walked into one meeting, and there were 32 people in the room. And I asked the leader, “What are you doing?” He replied, “We’re deciding how not to give you the data.”

Hemant Ahlawat: Wow.

Paul Hudson: I asked, “What does that mean?” He said, “Well, it’s not wise to give it to you without us telling you what to think about it.” He told me this with such honesty. That scared the hell out of me because I figured that if he’s telling me this to my face, what is everyone else doing?

Right then, I realized that any change that management introduces can bring a great deal of fear. That isn’t just true for our AI effort, but for any new tool introduced into big organizations. People want to slow it down to a standstill to buy time to think about how it impacts them. I get it.

Lareina Yee: For CEOs who are just starting to dive into agentic AI, what are the one or two things you’d tell them to do to accelerate the effort?

Paul Hudson: For starters, you must make sure that you have a basic level of understanding of AI and agents.

Second, here’s a key idea that worked for us. Back in 2021, we decided to put together a sort of AI club, consisting of 12 people from different functions such as procurement, clinical operations, and so on. These were respected change agents with no AI expertise. We told them to work with one of our partners and disrupt their functions. We meet once a year, and I don’t ask them questions in between. When we do gather, they tell their peers how their group has evolved its agent and AI approach over the previous 12 months. This is a great way to involve your newer generation of talent, who do not want cascaded innovation. They want to participate, they want to shape, they want to do things in a very different way, and we really give them a chance to do that.

Finally, CEOs should not underestimate the personal energy required. As people have said, AI may be the greatest revolution since the printing press. If you’re planning to delegate the AI revolution, then good luck to you.

Lareina Yee: I love that. Don’t delegate the AI revolution.

Paul Hudson: You must be prepared to bring the energy. For example, with the AI club, I personally chair the annual meeting. I decide who attends. I ask the key questions. I want to see the innovation, and I want to know how it’s happening in the most critical areas of the company. I’m in a two-day workshop with the key disruptors in the company, and I’m removing roadblocks for them so they can go on to the next level.

Lareina Yee: Paul, there’s so much to digest in our conversation, but I think people would also like to get to know you a bit. I’d like to ask you a few rapid-fire questions.

You’ve worked all over the world, from Japan to Switzerland to the US. Now you live in Paris. What do you love about the city?

Paul Hudson: The food, of course—I was at least ten kilograms lighter when I moved to Paris. I really enjoy how people care so much about architecture and art. I love the culture of sitting outside at a cafe and having a coffee or an apéro. And now PSG [Paris Saint-Germain] has won the Champions League!

Lareina Yee: My team wouldn’t allow me to have this interview without mentioning football. You studied economics at Manchester Metropolitan University. Are you a fan of Man City or Man United?

Paul Hudson: I can’t believe you even have to ask me that! I’m a United fan, through thick and thin from when I was five years old. I’m proud to have been born in Manchester and attended university there. I was not the best student, to be honest. I preferred working to studying, so I count myself lucky to have been in a purpose-driven industry for 35 years.

Lareina Yee: Finally, you, Hemant, and I are parents. Let’s end with a bit of advice for the student trying to make their way in a world changing under our feet. What do you recommend they study?

Paul Hudson: Two thoughts. The first is about how they study. When ChatGPT first came out, two of my kids were still at college, and I did the cliché thing of telling them not to use ChatGPT for studying. “It’s got to be honest, authentic work,” I said. And my son said, “You know, Dad, you used to go to a library to find the book you needed for an assignment, and somebody might have hidden the book. You’d hunt for it; you’d spend hours. And you were just trying to collect information before synthesizing it. Do you think it was a good use of your time? In our generation, we understand the responsibility, but the thought of wasting days bringing information together is not a good use of anybody’s time.”

I respected that. And I never asked them about it again.

As for what they should study, I was discussing just that recently with a group of other CEOs in the context of how AI impacts that decision today. If I remember correctly, AI will have the equivalent of something like five million PhDs’ worth of intelligence in a few months. So, maybe students should be focusing more on philosophy, geography, or history. Or creative pursuits. Areas that require reflective thinking and critical reasoning. If we assume that people have less and less time to interact, you must be an even better human in those moments you do have. The EQ piece is going to be the next gold rush, more than the IQ piece. I’m sure that will stir a bit of a debate, but I think there’s something to it.
