Reimagining Work: LinkedIn CEO on Leading Through AI Transformation

HarvardBiz

LinkedIn CEO Ryan Roslansky discusses how to lead through AI transformation, drawing on insights from his new book, 'Open to Work'. He emphasizes the need to rethink team design, talent models, and culture, and shares LinkedIn's approach, including embracing new roles and planning in short cycles.

As AI reshapes the foundations of work, executives face a pivotal question: Will you use this moment to simply optimize the present or to reinvent the future? In this HBR Executive Live, LinkedIn CEO Ryan Roslansky talks to HBR editor at large Adi Ignatius about what it really takes to lead through an AI transformation.

Drawing on insights from his new book, Open to Work: How to Get Ahead in the Age of AI, Roslansky describes how AI implementation demands rethinking team design, talent models, education, and culture. He shares how he’s approaching this leadership challenge at LinkedIn, from embracing a new experimental “builder” role to planning in six-week cycles. His argument: Organizations that treat AI as a catalyst for reimagining work—not just accelerating it—will achieve competitive advantage.

AI is reshaping work for everyone. And for senior executives, the question is: How do you ensure AI strengthens your strategy, elevates your talent, and compounds your competitive edge? Joining us today is Ryan Roslansky, CEO of LinkedIn and also executive vice president of both Microsoft Office and Copilot. He is co-author of the new book, Open to Work: How to Get Ahead in the Age of AI. He’ll be drawing on his vantage point at the center of the world’s largest professional network and one of the most data-rich platforms in business. And hopefully, he’ll share lessons from LinkedIn’s own transformation, including how the company is using AI to power growth while also trying to maintain trust at scale. So, Ryan, thank you very much for joining us today.

Hi, Adi. Thanks for having me.

All right, so let’s jump right in. You have a new book. We’ve been through several waves with AI. We’re still wondering: Is it overhyped? Is it underhyped? Is it coming for us now? Is it coming for us later? You’ve written a book about AI. What do you see as the context for this moment when your book is coming out?

Yeah, I mean, first of all, again, thanks for having me, Adi. And overhyped, underhyped—I honestly think it’s both overhyped and underhyped. And maybe that feels a little bit like I’m hedging. But I actually don’t think, in this case, I am. It’s overhyped in the extremes, these narratives that every job disappears or the idea that AI is just going to solve and fix everything overnight.
I think that kind of framing feels extreme. I think it pushes people towards fear or denial. And I don’t think that’s super useful. But where it’s underhyped, on the flip side, is in understanding exactly how AI is going to change the complete structure of work. It’s not just the tasks. It’s not just being able to research or summarize a document. It’s basically how the entire role that anyone does is defined, how an entire team of people in those roles operates, how companies think about value, how companies think about what their value is in a world of AI. So I think that’s a much bigger shift than many of the headlines are talking about right now. And it’s not something happening decades in the future. Obviously, it’s moving right now. I mean, people are using AI to code. People are using it to write, to analyze, to plan. And things are moving now. So basically, the reason that Aneesh and I wrote Open to Work is because we’re still in that early window. The outcome is not fixed. We have time to adapt. But we do not have time to ignore it or say it’s going to come later and we don’t have to worry about it right now. So the book is really about how we respond in a way that expands opportunity instead of shrinking it.

So I think there’s a temptation—or maybe the first stage of AI adoption—to use it to streamline, to make more efficient the work that we do and have been doing for a long time. I assume, if the technology proves its worth, it will be about helping us reimagine what we do. So it’s not just an efficiency play; it’s something more. How do companies think about getting to that something more? Because that involves redoing processes, redoing hiring, redoing the very nature of a firm. What are some good ways to think about, OK, how do I really remake my organization to take advantage of this technology?

First of all, you’re completely right.
One of the things that I frequently talk about with folks who are in college right now—the younger generation trying to figure out what happens in a world of AI—is, hey, look, some of the problems that you thought were impossible to solve, problems happening to the world, like climate change or poverty or certain diseases and healthcare issues—well, with AI, you can expand your thinking now. And you should really aspire and dream that these problems we once thought we couldn’t solve, now, with AI, you can take on. So to your first point, that’s the aspirational view on this that I’m really excited about. The reality, though, inside of a company—that’s where I think the more difficult part of all of this is happening. So I think, as a leader right now, you have to treat what’s happening with AI as a shift in how work happens. It’s not a software upgrade. It’s not that we’re buying new tools. I think it’s easy to say—and I’ve done it, we all do it—let’s experiment with AI, and that’s fine. But that’s not what this is about. That’s not the transformation. The harder questions are: If AI changes the tasks inside of a role, how does that role evolve? And then if a role evolves, how does a set of people who work together in a team evolve? How does team design look moving forward? If certain tasks change, if certain tasks shrink, if other tasks become available, how does that affect team design? And then I think there’s the human piece. Look, if your employees hear “AI transformation” and immediately think it means cost-cutting, that’s a problem. Leaders have to be clear about the intent. Covid was a leadership test. I think this is a leadership test. Technology can be the catalyst. But really, leadership inside of a company right now is what determines the outcome.

This maybe is a leadership question, but address the skeptics.
Because I think there are people who maybe were more gung ho about AI initially than they are now. I’m not saying that characterizes everybody, but there are some people who feel like, all right, there’s a lot of workslop. And employees are sensing it or experiencing it. The ROI has been limited so far. You can say it’s early days, but if you’re skeptical, you want to see results. How do you address the skeptics who are saying, yeah, I’m not sure I see this as big a deal as, say, the internet?

Yeah, I mean, I think we’re super early on, as usual. And it’s one of those things where, in any technological transformation, you go through this period where people are trying to figure it out, they’re trying the tools. It’s kind of cool, but I’m not quite sure what it’s going to do or how it’s going to affect what I’m doing or my workflow or my day-to-day. And people kind of sit in this period of time. And then, all of a sudden, something hits, and you start to see these inflection points. And we’re seeing that a lot. You start to see some of these tools that are able to do more and more. And you start to see a lot of the news cycles around, oh my gosh, now this tool is great, and it can do this. And now everyone’s focused on what they can potentially rebuild and go do with this new technology inside of their company. So I think that’s the stage that we’re in. My best piece of advice on this is, I don’t think you can ignore it. I don’t think it’s something where you should say it isn’t going to happen, or I’m not going to put my time and effort into understanding this. But again, I think that it’s not just a tool problem. And it’s not necessarily that you should think about, am I investing only in bringing some of these AI tools into my company, but more importantly, am I investing in understanding the way that work exists inside of my company. I mean, this is being said all over the place these days.
But take a look back at the invention of electricity: it was 30 years before most factories actually adopted the power of electricity to change the way that they worked. And it wasn’t because the technology didn’t exist or wasn’t available. It’s because you had to redo the entire workflow of how the factory worked to actually wire it for electricity as opposed to steam. Those are tough transitions. Those are leadership tests. Those are human transitions. So I do think that you can’t just say, hey, we brought AI into the company and it’s not quite clear that anything worked. You have to invest in the training and in understanding where ROI could or couldn’t exist. Give it some time to see what happens. Let bottom-up ideas flow. And the other thing—one of the things I focus a lot on right now—is there’s kind of a narrative out there, which is like, oh, people right out of college or in entry-level jobs may not be as important as they once were. I think it’s really important right now for every leader to be bringing in a lot of these AI-native people right out of college to show you how AI should work in your company. That’s the next generation. I think there’s a lot of value in doing that as well. So it’s really not just about technology and does it work. It’s about whether you can create the cultural shift right now to leverage the technology to do what your company needs to do moving forward.

So let’s talk about opportunities and challenges for workers and leaders. Let’s start with workers. OK, so in the book, you address your readers, saying that nothing, not even AI, can beat you at being you, which is undeniably true. But there’s a difference between being you and being employable. So it’s interesting that you’re talking about bringing in more entry-level people. We saw IBM talking about increasing its entry-level jobs as well. So that’s a thing.
But how do you respond to fears—and this could be inside LinkedIn and Microsoft, or more generally—fears that AI is going to take away employment for a lot of people?

Yeah, first, I mean, when we talk about how nothing, not even AI, can beat you at being you—obviously, that’s not the same as being employable. And I think that’s fair. The market rewards contribution, not just personality for its own sake, obviously. But what I’m pointing to there is that everybody’s job is a set of tasks: your job, Adi, my job, everyone’s job. And if you break down your job into a set of tasks and you start to understand which tasks AI can most likely automate and which it cannot, you start to understand that your job is more than your job title. Now, to be clear, if your job looks like it’s just a set of automatable tasks, you need to be thinking about what that means. And you need to confront that. And you need to be thinking about a new job. But the parts of your job that require judgment or communication or empathy—AI is not good at that. Those things don’t disappear. And in many cases, they become way more central to what your job is. So being you isn’t about ignoring skill development; it’s about being intentional about where you build depth, and again, learning how to use the tools, getting fluent, and then developing those capabilities that are great about you on top of that—the things that aren’t easily reduced down to automatable tasks. The people who do well, like in any transition, are not the people fighting the tools. They’re the ones using them and then doing the unique human things on top of them. So that’s where I really think it’s important that a lot of people focus their time and energy right now.

I’m actually going to bring in an audience question right now because it’s on point with where we are in the conversation. This is from Usman Tariq.
So when expertise becomes commoditized, in some ways, in certain areas by AI, does judgment become the real currency? You just mentioned judgment. And if so, how do you prove you have it? How do you prove that you have these skills that will create differentiation in the world going forward?

First of all, spot on. Great question. It’s funny: historically, we’ve talked about soft skills as if they weren’t important. I think through the last century of education and the world of work, there’s been such a focus on the harder skills, and the soft skills have kind of been pushed to the back. But ironically, I think they’re more important than ever. Similarly, there hasn’t been a lot of coursework, education, or thought put into how you actually build a lot of these soft skills. I’m not saying there’s none, but it hasn’t been a focus. You talk to a lot of kids who are going through the education system in the U.S., for example, and not a lot of people are taking classes on judgment or empathy. They’re taking classes on how do I learn how to code, or how do I learn various hard skills. And so my guess is that two things are going to have to happen. There’s going to have to be a shift in the way that we think about what education looks like around some of these soft skills. There’s a debate about whether you are born with something like judgment or you can learn it. And I think that if people start to put a lot of their time and effort, in the world of work and education, towards what we can do to help people understand, learn, and assess things like better judgment, that’s going to have to be the future moving forward. Because, to Usman’s point, that feels like the thing that stands out in a world where a lot more of the information or expertise becomes commoditized. Although, on the point about expertise becoming commoditized, I do think the way to think about it is the following: A lot of these AI models today are really good.
And they have access to all the information on the public internet that they can leverage, which is very valuable for a lot of horizontal or broad tasks—summarize a document or help me code something—because, again, a lot of that is publicly available. But the reality is that a lot of the jobs that happen inside of companies, or inside of specific functions or industries, have a ton of nuance and customization to them. Even the way that a certain role exists at IBM versus LinkedIn, or even LinkedIn versus Microsoft, has a ton of difference and nuance to it. So I do think, when we get to these moments where we feel like, oh my gosh, so much expertise is becoming commoditized—we’re at this horizontal layer. And then, for every, let’s call it “job type” in the world, or “basket of tasks,” you start to go vertically. And AI is not yet moving up that path. And then, remember, the complexity on top of that is that there’s a layer of customization on a per-company basis in how things are done and in the tools that are used inside of those companies. So there’s a lot for AI to learn. And I think that, right now, it’s kind of nailed that horizontal level. But the next frontier is figuring out the really complex problem of how you go up the functional path and then the per-company path as well. And those are much more difficult challenges.

I want to talk about the role of leadership in this era, CEOs in particular. How do they need to adapt? I talk to AI experts who commonly say that CEOs talk a lot about AI—they talk about it with their clients, they talk about it with their boards—but they don’t really seem to understand it. I don’t know if that’s fair. I don’t know if that’s still fair, but it’s certainly a trope. How do senior executives need to adapt to this era to make sure their companies don’t end up falling behind?

I don’t think it’s unlike any other leadership challenge.
First and foremost, you have to get insanely educated on the problem—like, insanely educated. Because LinkedIn is part of Microsoft, I have the fortunate ability to sit in Satya Nadella’s staff meetings, getting a front-row view of what’s happening in the world of AI. But for anyone who doesn’t have that luxury, you have to find the ways and the people and the connections to really educate yourself. You have to use the tools as well—use them to understand how to do your job, or not do your job. What works? What doesn’t? Form an opinion. And it’s not that you have to say that all these tools are great or not. But: oh my gosh, I understand how this works. I understand what information inside of my company can or cannot be leveraged by these tools. I understand the insane compliance risk that exists if some of these tools come into my company and have access to my data in a nonstructured or unregulated way. You have to ingrain yourself in the complexity of this problem. And then, from there, like anything else, you have to create a plan for what that means for your company. And at least I can explain—I think every company is different, but let me explain what I’ve learned at LinkedIn specifically. We build software. We build a large-scale consumer internet platform. And the core functions in our company are engineering, product management, and design. And inside of those, there are various breakdowns of those functions as well. The strategic advantage of a consumer company like LinkedIn is how quickly we can build and test new things. That’s it: understanding that maybe we have 100 ideas, and we’re going to end up throwing 99 of them out once we actually put them in front of users. So then the issue is that the complexity and the time it takes for all of those functions to go through existing silos and work together like we have historically doesn’t put us in the right position to succeed.
So what about if now we’re able to leverage AI tools that can break down those tasks in a much more seamless way? We can’t, moving forward, think about, hey, you’re just a designer or you’re just a product manager or you’re just an engineer, if you have at your fingertips the tools to do all of those things. So then the question—OK, it’s easy for a CEO to get up and say, yay, like this. And then it’s everyone looking at us, like, wait, what are you talking about? So then the question is how you impart that change. What we decided to do was create an entirely new role at the company. It’s called a builder. And literally, it’s a combination of what has historically been all of those tasks and all of those roles inside of the company. Then we decided to recruit for that role—literally, not in any kind of historic way that you’d recruit for a role, but basically: hey, if you want to apply to be a builder at LinkedIn, send us an example of what you’ve built with AI. That’s it. I don’t care where you’ve gone to school, if you have a degree, who you know, where you used to work. Can you use these tools to build something? We’re not replacing the existing functions at LinkedIn, but we’re bringing in a new group of people who are trying something new. We’re on month six of this new function in the company. And we’ve got a group of remarkable builders, most of them right out of college, who are teaching our company how to build moving forward and how to think differently. So at least for the strategic question of how you culturally impart change, how you bring something new to the company—that’s one way to do it. And I think that, again, just having a plan and trying things out, like always, is the most important thing. But this isn’t an “I’m going to delegate this to my CTO” type of problem. This is a spot-on, number one CEO problem to figure out right now.

We talked about individuals. We talked about leaders.
You’re actually starting to get into another question that’s come in, from Anbu. And it’s the question of what teams look like in the future. You talked about some of these roles—engineers, product owners, designers, now builders. What does a team look like? Is it similar to what we have now, but with builders there? Or is it completely different how we think about teams getting things done within our companies?

Another great question. So the second thing that we’ve done inside of that builder framework is that instead of this idea that, hey, for the next year or n years, you are locked in and focused on this part of the product or this part of the business, we plan much differently. We don’t plan in annual cycles; we plan in six-week cycles in most cases. And we bring together what we call “squads” of people who, for the next six weeks, are focused on one specific problem, because with an ability to build much quicker—to try things at a pace that I never thought possible—we’re able to try new things at different cadences. But it requires a different planning process, a different team process. And then, after those six weeks are over, people potentially move to an entirely different squad for the new problems that we’re now able to tackle inside of that space. And the teams are much smaller because they can move much quicker with the tools that they have. There aren’t fewer people; there’s still the same number of people. But they’re organized differently, in a much more flexible way, based on how quickly the market is moving, with the right tools, and not locked into one existing way of working or one project. So that’s another thing we’re testing. In some cases, it’s working great. And in other cases, it’s a challenge to get it right. But again, I think the worst thing right now is just standing still and operating the way that you always have.
You have a great, I think clarifying, observation in the book where you say, “The old ways of work were not designed to unleash human potential. They were designed for industrial efficiency, for speed, for scale.” So the old world is coming apart, right? Among other things, that suggests the education process needs to change. And we’ve all thought a lot about an education system that, in the U.S. at least, was created for the industrial era, not the current era. How do you think about how education needs to change? And that could be institutional, but it could also be within companies, in terms of reskilling. How do we impart the education that has to take place for this new era?

First of all, I’ll go back to the really thoughtful question before about how new skills matter a lot—judgment, empathy, et cetera. So, first and foremost, the most important thing for everyone to comprehend is that, to be effective moving forward, there’s a different basket of skills that matters more. One of the questions I’m asked most about education is: It needs to change quickly, but can it change quickly? And I don’t think the entire education system can pivot overnight. That’s just the reality. But it doesn’t need to completely reinvent itself to move in the right direction. To your point, the industrial model assumes stability: you went through some sort of college or training, you trained for a specific job, and then you largely stayed in it for your entire career. And obviously, that assumption doesn’t hold the same way anymore. And quite frankly, it hasn’t for a while—this isn’t even AI related. If you look at all the data that we see on LinkedIn over the past eight years—if you take the same exact role eight years ago versus 2024—the average set of skills to do that role has changed by 25%, which means that even if you’re not changing your job, your job is changing on you. It’s not even an AI thing.
That’s just, in general, how roles change over time. I think there’s a lot of angst and anxiety right now because it feels like they’re changing faster than ever, or they’re perceived to be through AI. But I think it’s important to realize that roles have always changed over time. And the education to get there obviously needs to change as well. So I do think: more of an emphasis on the human skills, more of an emphasis on AI literacy, more skills-based hiring—the thing I talked about with our builder program. Again, it’s not about where you went to school or who you know or your last company. It’s: Do you have the skills to build things using these tools? Those are cultural changes. And I think the more we embed into the entire ecosystem the idea that learning continues well beyond what we’ve thought of as formal college and formal schooling, the more we’re going to start to see that shift. So accepting this adaptability right now—that’s part of the job description. And it won’t be perfect in three years or five years. But as you probably know well, the education system doesn’t need to be perfect to still make a pretty important difference. So that’s where I’d focus.

Predicting the future is a fool’s game. But let’s say five years from now: What will separate the companies that got this transition right from those that didn’t? What’s at stake now in terms of taking the right steps—in terms of being AI comfortable, AI fluent, AI adoptive? What’s at stake looking five years ahead?

I’m definitely more optimistic on this. I think for the companies that do get this right, it starts with culture: What does our company look like moving forward? We talk about culture at LinkedIn as the collective personality of our organization. It’s who we are or who we aspire to be.
And I think you need to start by aspiring to be—or at least, we need to start by aspiring to be—a company that culturally, pragmatically understands how to leverage the tools and the fast-moving nature of AI to create way more value than we do today: by getting this right, by getting the processes right, by getting the tools right, by getting the roles right. And so, number one, I think it starts with intention—saying, hey, we are going to get this right, we have a plan. To your point, you can’t predict the future, and the technology is moving at a pace that no one truly understands. You have to be adaptable and nimble in that strategy. But it starts with pointing the company in that direction and bringing everyone along. This can’t be something that happens in a silo inside of an executive team. We have company all-hands every two weeks at LinkedIn. And one of the most important things that we try to do frequently is hack days, or bringing people up to talk about, wow, here’s some new insight that I found about how we can use AI to make LinkedIn a better company. So set these processes up, think through it that way, be adaptable. And then, again, think about the amazing value that you can create if you get this right—not only in the existing markets or TAMs that you think you can compete in, but in the new ones that are going to be created through all of this as well. So just get in the game and be a great leader.

I want to bring in one more question from the audience. This is from Zach. And the question is: How do you effectively drive bottom-up momentum in AI adoption in the face of workers’ fears of job elimination and uncertainty? How do you drive that bottom-up momentum in terms of adopting AI?

Zach, I’m going to give you a super tactical and technical answer in a second. But it’s the exact right question to ask.
I think so many leaders have learned this lesson the hard way, which is: you get up and you’re like, “Hey everyone, go adopt AI. Yay, go for it.” And you kind of look like a fool if you say that. And then everyone is like, what are you talking about? So I think that, most importantly—and obviously, this is not an ad for the Microsoft products, although I fundamentally believe that they’re critical in this—inside of your company, you have to have a strong AI foundation if you actually want to get up in front of the company and say something like that. Because what you quickly learn is that if you hand tools to people inside of your company that can help them do their jobs better, unless those AI tools have access to the information inside of your company, they’re stuck. And then what happens is people start copying and pasting a bunch of your sensitive information into these models. And that is the worst possible thing that can happen for your company. So you need to start with a foundation that is compliant, that is secure, that you can trust—one that allows these tools to flourish on top of it. Once you have that, which says, OK, you can actually be compliant and have access to what you need to do your job better, there’s a ton of fun things that you can do on top of it. Again, like I said, every two weeks, in front of the entire company, people line up to say, “Hey, I want to show you the thing that I’ve built, the hack that I’ve made, the cool new thing that I’ve unlocked—I think everyone should try this as well.” It’s not the CEO saying, “Hey, everyone, go try this new AI tool.” It’s the actual builders who are trying things and finding unlocks inside of the company that are evangelizing it. And I think there’s real cultural momentum that happens from there.
But again, unless you’re on a strong, stable, compliant, secure foundation to do this, it’s actually potentially fairly dangerous for the company. People will not be able to do the things that they need to do. And then, it just kind of sets it up for failure in the end. So both technically and culturally, you have to get both of those things right.
