AI and Humane Leadership: A Davos Discussion

How should AI and human leadership evolve together in this moment of rapid transformation? At Davos, HBR Executive and our partner Egon Zehnder convened a select group of global chief executives to discuss this very question.

Moderated by Adi Ignatius, HBR's editor at large, the conversation included a robust debate about whether AI is taking jobs or transforming them, as well as an exploration of how AI can coexist with human-centered leadership—strengthening trust, empathy, judgment, and innovation amid accelerating technological change. The panel includes:

• Olivier Blum, CEO, Schneider Electric
• Erik Brynjolfsson, director, The Stanford Digital Economy Lab
• Dan Schulman, CEO, Verizon Communications
• Anish Shah, CEO, Mahindra Group
• Brad Smith, president, Microsoft
• Renate Wagner, board member, Allianz SE

You can watch the full video above; the full transcript is below.

. . .

Adi Ignatius: We're hoping that today's discussion is not at 30,000 feet but is kind of in the workplace, and that we're going to be talking about AI as a leadership challenge. We're a few years into absorbing AI, since the launch of generative AI, so we're starting to see what works and what doesn't. Some amazing efficiencies, but also what one author has coined "workslop": AI is producing stuff that we're not always so proud of. We launched into investments, but there's some impatience now with the return on investment. So there's pressure to adapt, to adopt, but it's unclear how.

So I guess what I'm trying to cut through is this: We know that leaders need to act boldly here, but not recklessly. We know we need to adopt AI, but without losing the humanity that defines the companies we work for. So that's my setup. I want to bring everyone into the conversation initially, and then we can have more of a freeform conversation. Then we will turn it to you for your thoughts and ideas.

But Brad, I want to start with you. You probably usually do think at that 30,000-foot altitude. But if you're willing to come down to the workplace: We're a couple of years into this. I don't know where we are in the hype cycle.
But we're starting to see what it means to have AI in the workplace, what it means to manage humans and bots. So from your perspective at Microsoft, with the clients that you see, what are you seeing? How are you seeing this play out?

Brad Smith, president, Microsoft: Let me take one specific piece of this and then maybe talk about how it might be more generalizable. It might be something that Dan will have a perspective on. Let me start with a particular field where AI seems to be having one of its biggest early impacts: coding. You're probably constantly hearing that AI is very good at writing code. So if you are at a company like Microsoft, your product is typically code. We've said that we're using AI to write 30% of our code. The people who write code in a software company typically belong to a job family called software engineers, or SWEs. You hear about SWEs all the time. You might think that if AI is so good at writing code, we would say: Wow, we don't need as many SWEs. You might see downward pressure on the compensation levels of SWEs. And instead, I go to meeting after meeting, and it's the opposite. And so the real question it raises is: What's happening and why?

Well, the interesting thing about software, in terms of how we created it at Microsoft, is that it reflects something that I think has been generally true in the world of work for the last four decades, namely, increasing specialization. So there are software designers. There are program managers. There are product managers. There are engineers who write code. But what is happening now is that all of that is being brought together in what we now call a full-stack engineer, a person who does all of those things. How is that possible? Well, it's because they're spending less time writing code. And as they spend less time going deep on a particular function, it is creating an opportunity for them to go broad and integrate these different parts of the software development process together.
It's asking people to be more creative. We actually are taking software engineers and not just having a family of jobs that creates the first-party products for Microsoft. We're now also having what we call forward-deployed engineers, engineers who are basically in the offices, oftentimes, of our customers, helping them create products.

So the real question I think this raises is whether this is something that may reverse four decades in which we saw jobs move from generalists to specialists—and not just in software, but in finance, in medicine, in law, in all of the fields that increasingly require people to spend so much time staying on the cutting edge of so many detailed pieces of information and going deeper into more complex—your word, quite rightly—specific processes, and whether it creates a resurgence of opportunities for people to go broad. And what will that mean for jobs? I think it's great myself, to be honest. What do you do when you become more senior? You spend more time going broad, looking for patterns, identifying opportunities, being more creative. It has huge implications, I think, for entry-level work and for how we train people for a different way of working. That, to me, symbolizes or illustrates one of the fundamental questions for all of us. This is still early days. How is this going to change work? And as work changes, what does that mean for all of us who have jobs?

Ignatius: Thank you for that. Thank you for teeing everything up. You know, I interviewed the CEO of McKinsey recently, and he's not the only person who would say this. They talked about their hiring process in the age of AI and said that they've been undervaluing, basically, liberal arts majors, and that the old model of specialization may be giving way to something else. We want people who can think, because the machines can do some of the specialized work. So Dan, you were just referenced, so I'll bring you in.
You've said that maybe in a couple of years, we will have human-level AI. I guess I'm interested, if that's your personal thought and maybe your vision and maybe it's being built into your strategy: How do you think about transforming the company, transforming the workforce? How are humans and AI interacting now? And how do you prepare for the future?

Dan Schulman, CEO, Verizon Communications: The way I think of the job of a leader is to do three things. One, define reality as best they can, which is really important, because it's very hard for most people to hear. And it can be depressing, depending on how you're doing. But defining reality is really important. Then you've got to paint a vision of where you want to take the company. And then you have to establish a path between that reality and that vision.

I think the reality is that we are living in a world right now that is going through unbelievable change. And change is really hard for people. Like, if you win the lottery, which you have to say is good change, I think a lot of people get divorced after that. I mean, there's turmoil through change. And I think—and Brad will know this as well as me, probably better—we're going through three simultaneous waves of technology. We're going through AI. And I do believe we get to AGI in the next two to four years. In all my conversations this week here at Davos with the leaders of the various communities, the latest estimate is five years from now. And when I say AGI, I mean machines doing everything we can do, but better. That's how I define it. We're on the cusp and the precipice of things that are going to change humanity, fully change humanity. And the models we have today are the worst models we'll ever have in our lives. Anthropic is going to do eight releases of their model this year. OpenAI is going to do a number. Google is going to do a number. And every time they do another model, it's a step-function improvement.
So a year from now, the models are going to move from being, I think, great assistants to us to beginning to replace us. I think that will happen in programming and different parts of it. I agree with Brad: Most people conflate software engineering with everything that a software programmer does, and that's not right. But I think you're going to start to see replacement, and then you're going to start to see a lot of replacement going on. So I think it's possible that we see 20% to 30% unemployment levels over the next two to five years. I think that's possible.

I also think we're going to have quantum coming in at, call it, 400 to maybe a million error-free qubits in the next five years. Quantum is like 1,000 times more powerful than today's compute. That's going to hit. And then you'll have humanoid robotics in seven to eight years as well. So if you think about all these technological things going on and the impact on the workforce, I think it's huge.

Now, by the way, I also think there are amazing things that are going to happen with all this—amazing things in materials science, in our health, our longevity. Ideas are going to blossom, and maybe there'll be a bunch of new jobs that happen as a result of that. But I think the timing of all of this is such that there's going to be displacement, and then hopefully, maybe, something that comes out on the other side. But I think we need to prepare our workforce for that. And what I say inside Verizon is: We live in the age of AI right now. You might love it, or you might hate it, but we live in the age of AI. It's like what Shakespeare said: Nothing is or is not, but thinking makes it so. We as CEOs who run our companies need to think about how we use the tool sets going forward to stay competitive, because if we don't do it, our competitors will. And so it's going to be a massive transformational time over the next three to five years.

Ignatius: Thank you for that.
Jamie Dimon yesterday said AI is moving too fast for society, and if we're not careful, we're going to face unrest, which may or may not be true; it depends on how we handle it. Erik Brynjolfsson has joined us. He's a senior fellow at the Stanford Institute for Human-Centered AI. He will be known as the man who displaced Francesco. And Erik, I'll bring you into the conversation in a second. But let me turn this to Renate, because we sort of teed up this question of humans and agents interacting, and we're already building a workforce that integrates the two. I'm interested in your view: How should leaders think about talent, about hiring, about retention, about motivation in a workforce that brings in so much AI firepower and, increasingly, AI colleagues?

Renate Wagner, board member, Allianz SE: Thank you very much for having me here and for the opportunity to speak. Well, to start with the question about talent, I believe you need to start straight at what Dan said. As a leader, you need to understand: What are you trying to achieve with AI? And how can AI help you remain competitive in this somewhat turbulent world? To me, that is different for each company, obviously, because the starting points are different. You said it: You need to understand in depth what gives you competitive advantage. For us at Allianz, for example, we are an insurance company, and as such, we are a data- and information-driven company. So for us, AI is a lot more than just a tool we deploy to derive marginally better productivity. For us, AI is really a catalyst that has the potential to substantially transform our business model. I think that is the starting point for the question around leadership and talent and what talent I need for the future. And to illustrate that—and I think you said it—you need to really go deep and understand concrete examples.
So I'll give you a concrete example from Allianz, where we have already used this to transform the business entirely. My example is pet insurance. You know, in Germany we have 26 million cats and dogs and horses. Many of these dogs and cats are insured with us, with Allianz. Now, there is a saying in Germany that the last child has fur. It's a humorous way of saying we have an aging society.

Now imagine the following scenario: After a very busy weekend, a very busy vet weekend, we get thousands of invoices for treating cats with a stomach problem, dogs with an ear infection, etc. Each of these invoices is worth maybe, on average, a few hundred dollars, but some complex treatments can go up to 80,000. And what we have done is use AI to completely rethink this pet-insurance process within Allianz. In the past, our pet owners had to wait 21 days to receive reimbursement of these invoices. With this rethinking of the entire process, two thirds of them are now reimbursed within four hours. You can imagine what this has done for our customer satisfaction: On a five-point scale, it has moved from 4.2 for pet insurance to 4.8, 4.9.

But it doesn't stop there. It's not only that we automated this; we rethought the whole process entirely. It also made us much more effective. Another anecdote: Very often, if you go with a cat to the vet for a treatment, the vet will also do the nail cutting. Now, nail cutting of cats is not covered by insurance. The AI applies these rules consistently. So for us, it even meant we reduced cost. What I want to say is that it's these tiny examples that show how you can radically rethink processes, not only to become much faster but also much more effective, and really improve customer satisfaction.

Now, back to your question about leadership. What does this mean for talent? As you have seen, this was an entire business transformation.
And the business transformation obviously starts with the people transformation. There is this formula, 10/20/70: 10% is the algorithm, 20% is the technology and the processes, but 70% is the people, the people transformation. In my mind and in my experience, using this illustrative example, three things were particularly important on this journey.

The first one was training. Making sure we take people along, that we equip them with the knowledge of how to use these tools and how to really see their power, was absolutely crucial. I mean, there is this saying: A fool with a tool is still a fool. I completely agree with that. So we put a lot of effort into this. Last year at Allianz, we spent 100 million euros on training our employees, and each of our 160,000 employees worldwide completed 63 hours of learning. The important thing is that more than 30% of that was learning on AI and data excellence, because to me they go hand in hand. Garbage in, garbage out.

The second element, in addition to training, was recruiting, and you alluded to it. Bringing in the skills and the knowledge that you need is key. For us, this started straight with the people whom we recruit—and we recruit 20,000 people a year. Of those job postings last year, 80% included AI skills. So bringing in the right skills is a second key lever.

And the last one, I would say, is the leaders themselves. They are the real catalysts for this business transformation. It starts straight with creating an environment where people dare to experiment, where you have the psychological safety to really drive experiments and are encouraged to drive experiments, and with making sure that the leaders themselves do the training and have what I would call an AI-first mindset, so that they don't think marginally or incrementally but out of the box.
And in terms of AI, to your point, we encourage our leaders not to ask what AI can do for us, but what AI cannot do for us, to really think out of the box. So in my mind, it starts straight there, and then it's about walking the talk, because leaders will lose credibility if they don't really live as role models.

Ignatius: So Olivier, let me bring you in, because I'd love a CEO perspective now. My sense is that companies and CEOs were plunging aggressively into AI, certainly for a while: We've got to be there. We need to experiment all over the place, lots of pilots. And maybe now is a period of a little more reflection: Well, what is the ROI? How does this fit our strategy? Are we just doing AI? So I'd be interested in the experimentation that you've done at Schneider Electric. Where are you on that journey? And are you at a different stage in terms of how you think about AI and its potential?

Olivier Blum, CEO, Schneider Electric: Well, look, we are definitely at a different stage. Does it mean that I know exactly what the ROI is today? No. But does it mean that I'm absolutely convinced that it's transforming Schneider Electric end to end? The answer is yes. You know, it was a very interesting journey for me, because I started three, four years ago to get a bit closer to AI, not because we were using AI, but because we were working with companies like Microsoft or Nvidia. We were seeing that we would need very different types of infrastructure, very different types of data centers in the future, you know? And that's how we entered. And because all those tech companies are fantastic on the commercial side as well, they were telling me: Olivier, you should use more AI. Every time I was trying to sell more to Satya Nadella, he was telling me: You should look at AI; it will transform your company. And I remember I went to my executive committee and started to ask the question: How much are you using AI in your life?
It was three years ago, and they were all using AI. And then there was this time when I said: Hey, let's spend one hour every month, when we speak together, to see how we are using AI. And there was this fascinating moment for me around the performance review appraisal; you all do that in your company. We have a traditional way to do it. We had someone say: Hey, AI is fantastic, because with ChatGPT, employees can prepare their performance review. They don't have to feed everything into the system. And you know what? The leader can also prepare on the other side. And I said: It's ridiculous. The way we have designed the process for ages is completely transformed and impacted, because we are misusing it completely. So that's just to give you that example. But more important, that day I said: We need to manage the company very, very differently, and we need to test AI everywhere we can.

We are an energy tech company, so our job is to make energy more efficient. So let me give you a concrete example. Our job is to make sure that factories work, that data centers work, that you have access to energy. But you know that your equipment is aging at some point. We are lucky, because for 10 years all this equipment has been connected. But we were not able to do much with that connectivity. We were trying to build software to help our customers, but it was painful, it was complicated. Now we've been able, really, by structuring the data properly, to extract the data, to do preventive maintenance, to manage obsolescence. And I'd like to come back to your question, which is: Does it reduce the workforce? There is a shortage of electricians everywhere in the world. Everyone is facing an issue with how to do maintenance in their facilities. So actually, we have been able to solve a problem for our customers: to understand the aging of the equipment and to do preventive maintenance remotely, without sending people.
At a time when you have a shortage of workers, it's a beautiful business case. Now, it doesn't mean that it works everywhere in our company, for all the applications for our customers. It doesn't mean it works on all our processes internally. But at some point, we discovered that if we target very good use cases, it can transform the way we are managing the company. And let me bring in the last part: For leadership, it is exceptional, because I started to use it a lot myself at the beginning. Does it mean that you replace the CEO? No, you still have a CEO. It just makes me more efficient. But I've always been fascinated. You know, I have the privilege—I don't know if it's a privilege—of having worked 32 years in the same company. I was even the head of HR of the company. And I was trying to understand: Why are we so inefficient? Why are people complaining in multinationals? You realize that probably 30% of white-collar time today is inefficient. Imagine that you can bring technology that makes people more efficient in their day-to-day jobs, in the way you do performance reviews, in the way you do forecasting, in the way you manage everything in the company.

So that's my view at this point. I've moved to the second stage, where I'm 100% convinced that it will make Schneider Electric more effective and deliver more value for our customers. Am I able to prove the ROI? The answer is no. But at this point, that's my obsession, because I see how it's impacting the company and the way we are managing it. So I'm fully in. And because it brings results to the company and more efficiency to our people, we are fully in at this point.

Ignatius: I have to say, about my hope: I talk to a lot of CEOs who sometimes say: I can't get anything done, because our companies have become so complex.
The structure, it's matrixed, and I have legal and compliance, and I just can't get anything done, which is ludicrous, obviously, for a leader to say. But I guess my hope is that AI somehow creates efficiencies in that process, allowing us to lead the complex companies that we've created, and doesn't simply add more complexity.

But Anish, you've done a lot of experimenting with AI within Mahindra. I'd love to hear not just a shining success story (though if you're willing to share one, please do) but also some of the challenges that you're facing. Just tell us what you're learning from the experimentation: what's working, what's not working.

Anish Shah, CEO, Mahindra Group: Good morning, Adi. It's a pleasure to be here. Just as context, the Mahindra Group operates in 20 industries that contribute to about 70% of India's GDP, and it operates across 100 countries. So we've seen AI across many different industries, and the biggest challenge we've seen is that this is very different from past technologies, where our tech teams or our tech partners could come in and say: Here's how we can help you improve what you're doing. In this case, it has to be the business teams, the process owners, who partner very closely with the tech teams. And there is a fundamental question in that, which is: If this is successful, it is very likely that half the process owner's team is gone. So how do you solve for that? That is one of our biggest challenges in terms of getting everyone on board to say that this is important, because a lot of passive resistance can come in otherwise. And that's something we've tried to work through with everyone to get them on board, especially as we think about how industries and businesses are going to be transformed. So getting the buy-in first is critical.
The approach we have is, in the face of uncertainty, let's accelerate further, because we have to be leaders. If we're leaders, we can deal with uncertainty better. And if we deal with uncertainty better, that allows us to be bigger. It will create more jobs and give comfort, a psychological safety, to our teams: You're not going to be impacted. That is one thing that has helped.

Things that haven't worked for us are doing things that are subscale, and especially the word used earlier, "experimentation." People love experiments; people love pilots. But the challenge is: How do you go from pilot to scale? How do you make that happen? Because that takes leadership; that takes courage from someone to say: I'm going to stick my neck out, and it may not work. So how do you also create the culture to say that it's fine if that happens? We'll try something else after that. We've always focused on capital allocation and return on investment. But as a leader, I've also had to transition from that, to say it's not always about ROI. It's also about sometimes doing things that may not always work, looking at other benefits that come in besides the pure return on investment, and thinking about the broader vision: What are we trying to achieve here? If you're looking at achieving leadership in an industry, you have to be a tech leader. And some of it is the cost of becoming a tech leader along the way.

So we are learning. Yes, there are many success stories, and the one I would highlight is the ability to create more uptime in a plant. In our auto business, we've got about 10% to 15% more uptime because of preventive maintenance. We've quadrupled capacity in our auto business in the last four years, but we are still out of capacity, and we have waiting lists for our customers. So that 10% goes directly to our bottom line as well. So there are significant returns that we are seeing from this technology.
But I think the bigger challenges are around the people aspects. How do you get everyone to buy in that this is going to be better for us? Not that this is going to end up reducing 30% of the workforce. And that is a commitment we are making to our team: We are going to grow. And as we grow, we're not going to reduce workforces. We're going to create new opportunities for everyone.

Ignatius: Erik, we've been talking sort of about our own companies' experiences. You know, what I love about you is that you know the technology, but you also have the data. So I'm really interested: If you take a snapshot of the workplace now, what are we seeing? What's working? What's not working? And where does that take us?

Erik Brynjolfsson, director, The Stanford Digital Economy Lab: Sure. So I agree with what I've heard from Anish and Dan and others about the enormous revolutionary potential of this technology. The reality is that in the top-line numbers, we're not seeing that much happening in the aggregate U.S. economy in terms of productivity, in terms of employment, etc. However, if you dig in under the surface, there's a lot of heterogeneity. There are a few companies, maybe about 10% of them, that really are making a big impact. And there are a few applications and a few occupations where you're seeing significant impact.

So let me start with the productivity side. We did, I think, a careful study where we were able to get causal estimates comparing folks who were using LLMs to help with customer service and those who were not. There was a clear productivity gain of about 15% in that group within just a few months. Some specific employees were getting 35%, 40%, 50% increases in productivity. So that heterogeneity was very important. Others had basically zero productivity gain, and the paper describes some of their characteristics. On employment, I've updated my view, because initially we didn't see much.
Even when we looked at some aggregate data and sliced it a few different ways, we didn't see it. But ADP, the world's largest payroll processor, shared some data, and we were able to slice it in more detail. We have an exposure index: One of my co-founders at Workhelix and I developed techniques for breaking occupations into tasks so that you can evaluate each task. If you rank all of the occupations in the United States based on their exposure, you find the most exposed occupations. And for the youngest age group, you start seeing some very noticeable effects on employment. It's actually kind of a twist: Some groups have growing employment, some have falling employment. For folks in customer service, where we saw a lot of effects, in software engineering, and in parts of management and sales, we saw originally about a 13% decline in employment for people ages 22 to 26. For older people, employment was steady. In the least exposed occupations, employment was somewhat growing.

Furthermore, one of the most striking things was that you could look at how people were using the technology. Some folks were using it mainly to augment their work, to learn new things. Others were using it to automate and replace tasks. There was a very different employment effect in those two groups. The folks who were using it to augment had growing employment. The folks who were using it to automate, which was the majority, had somewhat falling employment in those categories. That paper, Canaries in the Coal Mine, came out about three months ago. We just ran some new analyses, and now that 13% is about 16%. So whatever the effect is, it's growing over time. Now I have to say, unlike the productivity study, that was not a causal study. We couldn't expose half the U.S. workforce to LLMs and half not.
So it's observational; there may be other things going on. But we did look at whether remote work, interest rates, or the tech overhang could explain it, and they couldn't. So whatever it is, it's very correlated with greater use of the technology. So we're beginning to see the first glimmers of the revolution that may happen in the next few years. The fact that it grew from 13% to 16% in just the two or three months between the waves of our analysis suggests that it could get a lot bigger in the next few months.

The final thing I'll say is that when you go within firms, you see just a tremendous amount of heterogeneity. This is where I've learned a lot from my company, Workhelix. Every company has lots of people using the technology in different ways. In every one that we've looked at, it ends up looking a lot like a power law. You see a few super users who are just crushing it. You've heard of 10x coders. Well, there are 10x customer service agents. There are 10x salespeople. There are 10x managers. There are even 100x people, who spin up 100 instances of agents doing all sorts of work. Very few, 1%, 2%, 3% of the folks, are doing that. Then there's this long tail of most of the people in most organizations who aren't really using the technology all that effectively.

That heterogeneity is a huge opportunity, actually. Once you can identify what those successful users are doing and then share that, much of it, not all of it, can be replicated by other people in the organization. So it's a way of getting out of the pilot purgatory that we heard a little bit about, the difficulty of scaling things up. You identify what's already working effectively in the organization. When someone sees that there's someone else in the company doing it, it adds a lot more credibility than just the CEO saying: Hey, everyone needs to use AI more.
So that gives me a lot of optimism that, if we can transfer the expertise that workers inside the organization already have to the rest of the organization, we're going to be able to level up much more quickly than in some of the earlier revolutions, which took years or decades to play out.

Wagner: Well, absolutely. That resonates a lot with me. One of the key things we found with our workforce, a big challenge and actually a big opportunity as well, is the question of trust. In driving AI adoption in an organization, for piloting but then also for scaling up, the key barrier is trust. If people feel that the organization uses AI in a responsible manner, the openness to adopt it is much, much bigger. And if you look at the latest Edelman Trust Barometer, it clearly shows that lack of trust is the biggest barrier. I can speak for Germany, for example: According to Edelman, 57% of people in Germany who are low users of AI say that the main reason is a lack of trust. So in other words, in my mind, it is absolutely fundamental to think this through, to be very transparent about how you use it, to be clear about data governance, and to be clear about what AI can and cannot do, because that's a big lever for taking people along.

Ignatius: So I want to try something. The panelists have sort of sat here quietly as the others have spoken and probably are dying to respond to something that came up. So instead of my throwing out jump-ball questions, if anybody wants to jump in and elaborate or make a counterpoint on anything... Brad, go ahead.

Smith: Let me offer one thing, because if you listen to all of us and what we're talking about, I do think, broadly speaking, AI gets applied in the workplace in two distinct ways. One is to transform specific business processes, like insurance for pets. And it's the latest version of business transformation.
And there’s, at this point, both an art and a science that organizations have developed about how to go about business transformation. And I think you all have captured some of what you need to do to do that well with AI. The second is just providing a tool that anybody can use. And I think that is a distinct opportunity. And there it’s a matter of not just providing people with the tool, but providing them with tips about how to use the tool, encouraging them to experiment and share what they’re learning, and, in some ways, modeling and inspiring and rewarding people who are finding ways to use it as a tool and do their jobs better. I’ll just give one example. Every Wednesday, I send out to 2,000 people the Copilot tip of the week. It’s a tip on how to use M365 Copilot. And any week where I have time, I look for opportunities to try out the tip before I send it out. About a month ago, the Copilot tip of the week was: Hey, if you’re writing for busy readers, as most people are, here’s a prompt. Take your draft and rewrite it using the six principles from this book, Writing for Busy Readers. And I said, you know what? I got a long email from somebody this morning. So before I sent this out to you, I applied that prompt to that email. Wow, it was a lot easier for me to read. I sent it out to 2,000 people. About 10 minutes later, I got an email back from somebody who said, “I fear that that was my email.” I said: “You’re right. This is a great tip for you too.” But it just starts with recognizing that these are two very different scenarios to think about. Blum: Maybe I’ll jump in. For us, it’s really fascinating, because we’ve been obsessed at Schneider with one thing: how you make energy more efficient. And we’ve been trying to understand, on the technology side, what will help us to save energy, to better consume energy. If you looked at our presentation 10 years ago, we said it would be about electrification, automation, and digitalization.
And with the latest technology that we are experimenting with, probably for the first time in the history of the company, I start to see the opportunity to reach the ultimate goal. I’ll give you an example. Take your home. All of you, in your home, you have an electrical panel. It’s not a connected panel. You’ve been trying some home automation stuff or whatever; it does not work. We are probably a couple of years away from a stage where your electrical panel could be connected, where we could automate it, apply technology, apply an agent that will help you reduce your energy consumption by 20 to 30% without you even noticing. We don’t do it today. We don’t do it because the technology was extremely complicated to implement. It was extremely complicated for the user. Our electricians were not able to implement that technology. So that’s just one example of where technology could bring progress for all. That’s why I’m quite optimistic. Now, I’m not a naive person, and I agree with what has been said. Our job in the company is to make sure people understand how it works. So we are training all our employees in the company. But at the same time, and I think you said it, sometimes you have to force transformation in the company, because it’s not something people will embrace by themselves. So it’s a kind of push-and-pull effect. But I think if we embrace the technology, and if we control the impact of the technology, I believe these technologies can bring progress for all. And that can help to solve big problems we all have to face in the world. Shah: I would jump in with two points here. First, agreeing with Olivier that there are lots of different applications; and second, empowerment helps find them. So let me expand a little on the energy point. As part of our sustainability efforts many years ago, we had signed up for what was called EP100, Energy Productivity 100.
With AI over the last couple of years, we’ve been able to accelerate that to the point where we reached our EP100 goal five years earlier than we had planned. And EP100 effectively means that we can produce the same number of SUVs or the same number of tractors with half the energy. So now think about this from a sustainability standpoint. You don’t need to worry about where the energy is coming from. You just need half the energy to make the same number of SUVs and tractors. We’ve achieved that already. The second aspect is that as you empower people, they come up with different ideas. We’ve got paint shops looking at AI to see how we can improve the quality of paint. Yes, it saves cost, but the car looks better because the paint is very uniform all across. Welding: you wouldn’t have thought AI helps with welding, but welding for us was a destructive testing process where you would test samples. Now AI does 100% of welding tests, which creates a car that makes less noise and is better put together. There are many such examples, but that empowerment is what is essential in having people come up with different ideas. Schulman: I would say, in my view, what makes this really hard is that there are a lot of contrasting things going on. You talk about trust. The real truth is it’s so dynamic, and we are at a precipice right now, that if you say to your employees there’s not gonna be any job disruption, I think you lose all credibility, because all of them get that there’s going to be. And as you’re looking at statistics right now, it’s really early on, and a year from now it’s gonna be radically different. It’s gonna be very hard for people graduating from colleges to come into companies, because a lot of the entry-level stuff will be automated. I think the biggest impediment to a company’s future success, especially larger companies, is its past success.
You tend to wanna look backwards at what made you successful and then try to repeat that a little bit more efficiently every year. It’s like 10%: we’ll cut costs by 4%, maybe grow our revenues by 6% or whatever, so we can grow our EPS by 10%. These are technologies that demand—and we have to think about this, even if you may not get there—10x type of thinking. How do we completely reimagine, to your point, what your value proposition is gonna look like, how your workforce is going to look? And as a result of all that, the cultural element of this—the tech, in my estimation, is going to be there. We are gonna be at AGI, and you have to get it in your heart and your soul that that is fundamentally different than anything we’ve ever experienced. Smith: Dan, what is AGI, though? Define what AGI is. Schulman: AGI is where machines can do most anything we do better than we can do it. Smith: And I have to admit, I just fundamentally disagree. Schulman: I know you do. Smith: Because—no, and it goes to the heart of this. Can people get better? Can technology be a platform that enables people to get better? Let’s just say we’re in a constant race between humans and machines. If we’re just gonna say that the best we are today is the best we’re ever gonna be, then computers will outpace us. But if every time a machine gets better, a human can use that machine to get better, then I will argue that in many areas, machines will never catch up. It’s all about whether we can inspire—I mean, you talk about leadership. I appreciate the point about realism, but my goodness, are we not going to use, as employers, as leaders, technology as tools to help our employees get better themselves? Schulman: Of course we are. And they’re gonna do that in their personal life and in their business life. For those who are still there, they’re going to be way better.
But, Brad, I mean, machines learn at rates that are so—like, I don’t know why I’m telling you this, because you’re like—machines learn at rates that are so much faster than ours. Our inputs and outputs are so much slower. When I look at what machines can do right now: I can write my wife the most beautiful poem on her birthday right now. It takes about five seconds, and then I readjust it. And we’re just beginning. And yes, that’s made me a better person, but it’s not gonna replace, like, something that can automate coding. Smith: Let me challenge you on that, though. Schulman: And do paralegal work in, like, five seconds, which would take, like, 20 hours. Smith: No, Dan, let me just, let me call you on this, because you have just illustrated an example. You said that a machine can write a better poem for your wife than you can. I don’t know, what kind of husband does your wife want? Does she want to know what you think, or what you can ask a machine to say she wants to hear? It all goes to the definition of what we want people to be. Schulman: I prompt it, but it’s way better than me. Okay, okay. Erik, jump in and bring us home. Brynjolfsson: Look, there’s no question that AI is vastly superior to us on so many different dimensions. There’s also no question that it’s a very jagged frontier, and that it will be for a long time: there are things that machines do better and things that humans do better, and that’s why humans and machines together can usually do something that a machine alone can’t do. And let me be more concrete about that jagged frontier, not just speak in generalities.
You can divide tasks into three big parts: defining the right question, whether or not you need to write a poem for your wife and what things you want to say in it; executing it, writing a very beautiful poem; and evaluating it, going back and saying, “Wait a minute, okay, that part I don’t think I want to include.” And we go through all the steps. For most of human history, humans had to do all three parts. Now machines, as Dan pointed out, are getting incredibly good at the execution for more and more tasks. That puts the comparative advantage of humans more on defining what the question is, where we want to point this powerful tool, and on evaluating whether this is really what we want to achieve. And it actually becomes a loop. And so all of our organizations are increasingly becoming organizations filled with people who use the technology to execute—and that’s a super important part of it. But these other components are in some ways even more important. There are all these CEOs on the panel here, except for me. But I think all your organizations are gonna be filled with CEOs of fleets of agents. And those agents are gonna carry out the instructions and answer the questions that your employees give them. They’ll have 10, 100—some of my students have hundreds of agents working for them in parallel on different kinds of problems. And maybe CEO isn’t quite the right word. That’s an executive officer. I’m trying to coin a new term, chief question officer, CQO, because I think that’s increasingly gonna be the leverage point. If anything, as the tools get more powerful, our ability to ask the right questions will increasingly become the key differentiator and the key thing for us to do. Smith: But I will say, you just defined the essence of how people can use the tool to make themselves better. That, to me, is the key. Brynjolfsson: Absolutely.
And it goes back a little bit to the early data, and we’ll see how it plays out. The folks who are using the technology to augment what they’re doing seem to do better than the people who are looking purely to replace what they were doing. Schulman: But that’s the beauty of AI. You can use it, and you can get better. But, and I know, Brad, you don’t like to hear these things because you’re creating AI, go function by function by function inside a corporation. Like customer service: 80% of customer service is repetitive questions, calling in, I forgot my password, what’s my account balance? You don’t need humans for that going forward, right? You just don’t. Software programming: you’re going to eliminate a lot of software programmers, or that part of their job. Legal: I don’t really need a paralegal now, at hundreds of dollars an hour, when one of these machines can answer that question in a second. If I go into marketing, I’m going to need fewer people. It’s clear to me. So I just don’t see how it’s possible that the number of people inside a company doesn’t come down. Okay, this is fascinating, and I kind of want to just let you guys go at it, but we have a lot of really interesting and smart people in the audience too, so I want to get that conversation going. If I call on you, wait for the microphone, because this is all being recorded. Rich Lesser, the chairman of BCG, I saw you sort of had your hand up, so if we could get Rich a mic in the first row, please. Rich Lesser, global chair, BCG: Great, thanks, Adi. This has just been a great conversation, and an authentic one, about just how much is coming. There would be so much to pick up on, but there were two points that struck me, one of which is very related to this Davos.
The first point is something we had just published in our research that all of you have brought to light, which is that just in the last year, there’s been a dramatic rise in the degree to which CEOs realize how important their role, and the role of leadership, is to the way AI is going to get used in their organization. And yes, we need great technology teams to support this, but this is going to be increasingly driven by CEOs. The stat went from well under 50% to 72% in one year: the share of CEOs who say they are the ultimate decision maker on AI. And Brad, you brought that to life in talking about how tools get deployed, that you can’t just have the tech team put it out. It needs role models and examples and leadership. Renate, you brought it out in creating an entirely new business model. That’s not going to come from the tech team. That’s going to come from someone who understands the pet world and how we will use pets. And Anish and Dan and Olivier all talked about it in the context of process changes and the way workflows need to be redesigned. And I think that sense that it is leaders in the organizations who are going to own this is dramatically different than a year ago. And we see it in the time spent: leading CEOs are now spending well over eight hours a week working on AI. I’m not talking about the tech community; I’m talking about the rest of the world spending well over eight hours a week thinking about AI. The second thing, which is this conversation at the end: no one knows exactly how this plays out. But I think if you asked many people at this Davos, they would say CEOs are still doing a lot on different fronts, climate sustainability being one, but they’re talking much less. And that CEOs need to, quote unquote, stay in their lane, be more quiet, use their voice very carefully. And in some ways, I think that’s right. And I do think there is something to staying in the things you know best.
But with the amount of societal change that’s coming and the amount of change inside organizations that’s coming, the voices of CEOs won’t all be aligned; they will have very different perspectives on the impact, on how they intend to use it in their own organizations, and on what it means for society. I think the expectation of staying quiet in the years ahead is going to get much harder to meet, because the impacts are gonna be felt much more fundamentally, whether in one department in a company, across a whole company, or in changing expectations of jobs for young people. And I think it would be the wrong message for CEOs, and senior leaders, of which we have many in this room, to take away from this that it will be possible, or appropriate, or consistent with what you’re trying to do to motivate and engage people, to be too quiet about what these impacts are and how we are trying to address them. And those two things came out beautifully in the comments that you’ve all made in the last hour. Just quickly, Rich, you didn’t mention me, but I think I’m the chief question officer. I like that. Schulman: Rich, I totally agree with what you just said. It’s why I’m pushing hard on scenario thinking. It’s the future, so nobody knows what it is. But I think the most likely scenario is that unemployment goes up. And I think that as a result, as CEOs, we need to think about what reskilling looks like. How do we work with the public sector to think about what it means if unemployment goes up? What does it mean if we have 15% or 20% unemployment in a democracy? What happens as a result of that? And so the reason I’m pushing back so hard on the idea that nothing’s gonna happen is because I think things are going to happen, and I think we need to take responsibility for that scenario. It may not play out, but we need to be thinking about it, because it’s likely.
Smith: Let me, if I could. First of all, Dan, I agree with you on this: I’m not saying that AI will not be better than anyone at doing anything. Where I disagree is the notion that it might be better than everyone at doing everything. But go back to Rich’s comment about addressing societal change, and also the pressure that I think now exists to, quote, stay in your lane. Okay, everybody here who is a leader of a firm hires people. I think one of the elephants in the room right now is the impact on entry-level work. It is the difficulty that last year’s and this next year’s college graduates are finding in getting a job. Let’s first hypothesize, because I think there’s some support for it, that one of the pressures on entry-level jobs is that what you do in your first year of something actually can be done pretty darn well by AI. Maybe it can even be done better by AI. So what are people doing? They’re hiring fewer new college grads. But then let’s think more broadly about the history of how people have been developed in different fields. Apprenticeships have been a common aspect of many fields. They still are, for example, in skilled trades and the like. We have an opportunity, and I might even go as far as to say a responsibility, to think about whether we should change entry-level work. Maybe we ought to be thinking a little bit about that apprenticeship model, about bringing people in, having them do three or four different jobs in rotation over the first six to 24 months. Not just because they are going to be able to keep up with AI, but because, if we think that way, this might be the best time in any of our careers to hire the best young people we’ll ever meet, because our competitors are not thinking this way. And we might have the best young people three years out of college if we capitalize on this opportunity. If you take the topic of this session, it’s just that opportunity to think differently.
And then, by the way, Rich, apropos your point, I think that is a way for us to do something that is good for our companies, good for young people, and then use that as a platform to talk and think more broadly. Blum: If you’ll allow me, I’d like to expand on what you said. You mentioned one elephant in the room. The second elephant in the room is geopolitics. Why do companies have a huge responsibility today? We are multinationals. We have people everywhere in the world. Take our company, but I’m sure it’s the same for everyone here: the U.S. is our largest country. China is number two. India is number three. The Middle East is number four. We are a European company by origin. At this point in time, we have a massive responsibility because we are facing two major accelerations: the acceleration of technology and the acceleration of fragmentation. And in the case of Schneider, the acceleration of the new energy landscape. When you are facing this kind of situation, if companies, as multinationals, as leaders, don’t manage to find a way to keep people united, because we are living on the same planet, and to make sure we manage the risks of the technology we are talking about, I don’t think we are doing our job. And since it’s difficult for politicians to find agreement, I think more than ever, companies, multinationals, CEOs in particular, but not only them, have a massive responsibility today. And I think that’s why this topic is so interesting today: I do believe it goes beyond technology. It’s a question of geopolitics and how we want to manage tomorrow. Shah: Two perspectives. One is that technology has changed lots of jobs over time. Many of us have grown up in a world where there were no computers. If you look at the iPhone or any phone today, you could find a hundred devices it has replaced. What happened to all those jobs, those service jobs? AI is accelerating this. And the question is, how do we react to it?
I completely agree with Rich that CEOs have to think broader. If we do reduce cost a lot, it’s going to create more profits for us, but profits are temporary if you don’t think of them in a broader way. Purpose has to come first. And how do we think of purpose from the standpoint of employees in our companies as well as broader societies? How do we create greater opportunities for them that serve our customers better and that are not linked exactly to what they do today? How do we reskill folks and help societies develop and grow as well? I think that is a broader role that CEOs have to play today. So we’ve got just a few more minutes. I’d love to get a question or a comment from someone else in the audience. Yep. And unfortunately, it has to be brief, because we’re going to have to stop soon. But yeah, please, and identify yourself. Sanjeev Krishnan, lead, PwC in India: All of you have done a brilliant job on AI, and this has been a brilliant conversation, but I’m looking at a maybe less mature organization, right? Just putting ourselves in the shoes of a less mature organization: with so much disruption, and with what AI can potentially do, what is the foundation that they must set for AI, for the success of AI in their organization? Because it comes down to setting the right culture and the change management. And you said, Dan, that our previous success could be our biggest problem in creating incremental value using AI. How would, let’s say, somebody that is not so far along on the maturity curve, how would a business like that adopt AI and create success? You are all leading very mature organizations. What about somebody who’s less mature than you? Schulman: I think in many ways you have an advantage. Honestly, the hardest thing that we’re all fighting inside our companies is culture, inertia. This is the way we always used to do it. Wait a minute, this is scary to me.
If I were an entrepreneur starting right now—it’s the best time to be an entrepreneur. Your cost structure is going to be way lower. You can now talk into a machine and have an app created for you. I mean, to me, it’s an incredibly exciting time. One thing that I can see happening is a Cambrian explosion of entrepreneurship, because of this explosion of ideas and everything. It’s a great time to be an entrepreneur. It’s a great time to be a less mature company. Brynjolfsson: Yeah, I agree about the Cambrian explosion in entrepreneurship. But for bigger companies, I think the framework that Brad laid out earlier applies: the two ways you can use it, to affect specific tasks that people are doing, or to do the reorganization. Both of them start with this task-based analysis; it gives you visibility. And it’s the same principle whether you’re a mature company or a less experienced company. Once you get visibility into the tasks that people are doing, it starts becoming much more visible how AI can affect them. We have a taxonomy of a couple hundred thousand different tasks, and which ones AI can do, and you get a roadmap of what’s happening. And then you get visibility in your organization. Today, AI has made it possible to see a lot more of what’s happening in the company. And there are these super users that I described before in your organization already. Making that information available allows you not only to have other people do what they’re doing, but also gives you the raw material for a bigger transformation. Olivier, you have the final word. Blum: I just wanted to say: give the mic to your employees. You realize that your employees usually know much better what to do than your leaders. And that’s the way you change, because they are using AI every day. Most of them have started to use AI every day. And they are the ones facing the difficulties.
Give them the mic and give them the opportunity to express what could be the best use case for the company.
