New research from Microsoft reveals that peer influence is the strongest driver of AI adoption in the workplace, surpassing even leadership mandates and formal training. The study highlights the importance of social learning and trusted colleague interactions in encouraging employees to experiment with and integrate AI tools into their daily workflows, while leadership communication has no direct effect on workers’ AI use after controlling for peer influence.
Leaders face a cruel paradox in AI adoption. They’ve done what they’re supposed to do—deployed the right tools, invested in training, embedded AI into everyday workflows—yet real adoption remains uneven and hard to scale.
The playbook that worked for past technology waves—top-down messaging, broad access, formal training—is no longer enough. Generative AI changes the equation. Unlike prior workplace technologies, it doesn’t arrive with predefined workflows. Leaders don’t know exactly how generative AI should be used in every role, leaving employees to redesign their work in real time. That shift introduces friction. In a climate of economic uncertainty and anxiety about job security, many employees choose caution over visible experimentation. Learning becomes private. Adoption stalls.

What separates organizations where AI takes hold from those where it doesn’t isn’t better infrastructure or leadership mandates—it’s peer influence. Employees who see trusted colleagues experiment with AI, adapt it to real roles, and share what works are far more likely to use AI frequently, use agents to automate workflows, and pass on that knowledge. Peer-to-peer learning does what formal training alone cannot: it provides persuasive evidence that AI is safe, relevant, and usable in real roles.

The Research

We are researchers at Microsoft who study how AI is reshaping work. Through our studies and ongoing conversations with companies navigating AI adoption, we repeatedly heard the same concern: despite access and encouragement, AI use wasn’t spreading as rapidly as leaders hoped. We wanted to understand why—specifically, how social dynamics inside organizations shape employees’ decisions to use AI.

To do that, in July 2025 we surveyed 557 U.S.-based information workers employed at large companies across a range of roles and industries. All respondents used AI, but in very different ways. Some used it infrequently—to draft an email or summarize a meeting—while others had integrated multiple AI tools into their daily workflows. Some experimented alone; others worked in teams where AI use was shared and discussed.
We measured AI adoption in three ways: how frequently people reported using AI, whether they had built or used an AI agent, and whether they had ever shown peers how they use AI. Our team also drew on decades of cross-disciplinary scholarship on how organizations adopt new technologies, which pointed to five social variables we expected to work together to drive AI adoption:

Organizational culture: How innovative and risk-tolerant employees report their company is, and how collaborative, stable, and secure employees feel.

Leader encouragement: How leaders and senior management help with and encourage AI use.

Facilitating conditions: Resources, training, and infrastructure for AI use, such as a learning channel.

Social capital: How many trusted colleagues people can turn to within and across their organization.

Peer influence: How much colleagues normalize, encourage, and teach one another to use AI.

We expected that AI use would be highest when people worked in organizational cultures that felt innovative and secure. In these organizations, we anticipated that leaders would encourage AI use and create the facilitating conditions to drive adoption. Safe and secure organizations would also be those in which employees had more trusted connections to one another, which would enable peers to have greater influence on each other’s AI use.

Peer-to-Peer Learning Increases Adoption

When we held other factors constant, a one standard deviation increase in positive peer influence was associated with an 8.9 percentage point increase in the likelihood of being a heavy AI user—defined as using at least one generative AI tool multiple times per day. Facilitating conditions were a close second: a one standard deviation increase in that index raised the likelihood of being a heavy AI user by 8.5 percentage points.
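To make the "one standard deviation increase → percentage point increase" framing concrete, here is a minimal sketch of how such an effect reads off a logistic model of heavy use. The intercept and coefficient below are purely illustrative values chosen to land near the reported figure; the study's actual model, controls, and estimates are not reproduced here.

```python
import math

def p_heavy_user(peer_z, intercept=-0.6, beta=0.37):
    """Logistic probability of being a heavy AI user given a
    standardized (z-scored) peer-influence score.
    Coefficients are illustrative, not the study's estimates."""
    return 1.0 / (1.0 + math.exp(-(intercept + beta * peer_z)))

baseline = p_heavy_user(0.0)     # employee at the average peer-influence level
plus_one_sd = p_heavy_user(1.0)  # one standard deviation above average
effect_pp = (plus_one_sd - baseline) * 100  # roughly 8.8 points with these numbers
```

Because the logistic curve is nonlinear, the percentage-point effect of a one-standard-deviation shift depends on where on the curve an employee sits, which is why studies typically report an average marginal effect across the sample.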
The effect of peers was even stronger for advanced use: the same increase in peer influence corresponded to a 10.4 percentage point increase in the probability of using an AI agent, compared to 6.1 percentage points for facilitating conditions, even after accounting for overall AI usage.

Leadership communication had no direct effect on workers’ AI use after controlling for peer influence and other factors, but leaders still play an important indirect role. Leader encouragement is the most important predictor of whether workers report being influenced by their peers about AI. Quantitatively, a one standard deviation increase in our leader encouragement score was associated with a 0.38 standard deviation increase in peer influence. Peer influence also rose when employees had more social capital—trusted colleagues they can learn from. Both of these social factors have stronger relationships with peer-to-peer learning than facilitating conditions like formal training, although those matter, too. Social capital, in turn, was highest in organizations employees perceive as innovative and secure; a one standard deviation increase in this organizational culture index was associated with a 0.51 standard deviation increase in social capital.

Fear had its own influence on AI use, and it wasn’t good. Employees who feared falling behind were less likely to be heavy users or to experiment with agents. This shows how important it is to ensure people feel psychologically safe in their AI journey. Without shared norms about what AI use is acceptable or visible peer examples of AI use, fear translates into caution and suppresses adoption.

The qualitative data reinforced these patterns. When we asked employees how informal interactions shaped their AI use, the contrast between high and low users was striking.
Eighty-eight percent of responses from employees in the top quartile of AI use described peers as influential—often citing concrete examples—compared to just 50% among those in the bottom quartile. Among lower AI-use employees, AI was often absent from informal conversation. Twelve percent reported never discussing AI with colleagues, and they were far more likely to say, “the subject of AI has literally never come up in informal conversation with my colleagues” or “informal conversations at my work don’t influence me when it comes to AI tools.”

Why do these informal peer interactions, sometimes in meetings but often over lunch breaks, in coffee rooms, and in group chats, have so much influence? As we write in our research paper, peers offer what one survey respondent called “real time reviews from trusted sources.” People learn practical how-to tips from their peers that speed up their role-specific work. It’s often peers who share guardrail advice, encouraging quality controls like fact checks, tone passes, and source validation. It’s peers who make it okay to try something, fail, and help others avoid the same mistake. It’s peers who provide on-ramps for people who are wary or lack skills. Perhaps above all, it is peers whose success stories spark curiosity, increase comfort and enthusiasm for using AI, and normalize its use.

As one heavy AI user described, “hearing another coworker telling me how he used an AI program to help find inconsistency in our inventory and how easily it helped him instead of having to spend hours searching was a huge incentive for me to really buckle down and start learning and liking AI programs.”

This kind of informal, bottom-up learning environment builds on itself. The more peers feel safe sharing and collaborating with AI, the more they teach each other new ways to use AI, and the more their attitudes change and adoption spreads.
Quantitatively, a one standard deviation increase in peer influence was associated with a 13.7 percentage point increase in the likelihood that employees reported teaching useful AI techniques to colleagues—even after controlling for their own AI usage.

How Leaders Improve Peer Influence

While top-down mandates around AI adoption don’t yield the same powerful results as peer-to-peer learning, leaders play an important role in setting the stage for this learning to happen. They create the conditions for peer learning. Leadership communication may have had no direct effect on workers’ AI use after controlling for other factors—but that doesn’t mean leaders are off the hook. Leaders steer organizational culture, influencing employees’ potential to develop social capital. Effective leaders shape whether peer influence can take hold at all. By fostering innovative, psychologically safe cultures, leadership creates permission structures that make sharing AI use acceptable and even encouraged.

The way leaders talk to workers about AI also matters because it sets the tone. Employees who rarely used AI often described leadership communication as restrictive or transactional—lists of approved tools, mandates, or links to self-paced training modules. Effective leaders take a different approach.

They are consistent in encouraging AI use. They set expectations and follow through. While only 3% of respondents who rarely used AI used the word “consistent” in describing their leaders’ communication, 11% of heavier AI users did.

They create and support multiple means for employees to learn about and discuss AI use together. “There are many channels that encourage the use of AI in my organization,” said one respondent, in a reply that captured the range of ways leaders did this.
“All levels are encouraged to explore ways to apply the tools… There are seminars, group meetings, and frequent newsletter type communications… There are also channels and teams dedicated to the effort to support any parties interested in learning or exploring more.”

They model and teach their own AI use. Leaders do this by using it in meetings and sharing their own experiences—both successful and unsuccessful. “The more I see my leaders do it,” said one respondent, “the more I know it’s ok for me to do so.” Seventeen percent of heavy AI users described leadership influence through modeling, while only 12% of lighter users did. Across our qualitative responses, employees repeatedly said that seeing a leader demonstrate a specific, real use of AI made its use feel acceptable.

They also carve out time to discuss AI on a regular basis. One employee, for example, described his leadership as “consistently sharing examples of how they are using Copilot” and having “a standing 15 minutes during monthly staff meetings to review new Copilot prompts and results.”

They invite employees to share their experiences and reward them when they use AI effectively.

Leaders don’t have to be the experts. Sometimes they simply aren’t sure themselves what to do with AI. Instead of seeing this as a negative, leaders can use it as an opportunity to open up the conversation about AI. It may be like it was for the employee who said, “it is we of the lower ranks that have been helping some of our top brass/leaders get fully on board with using AI tools.” Having gotten there, the employee continued, the “leaders have become more enthusiastic and have had the chance to learn more. We receive a lot of positive feedback for the ways we’ve learned to incorporate even the most limited amounts of AI.”

. . .

The real barrier to AI adoption is not training gaps, a lack of leadership clarity, or even fear—it’s invisibility.
When learning stays private, employees don’t see peers using AI in real roles—sharing practical lessons, missteps, and outcomes—and they’re far less likely to become heavy users. The leadership takeaway is simple: you can’t scale AI by urging adoption—you scale it by making learning visible. When experimentation is seen, shared, and socially safe, fear loses its power and adoption follows.

The authors wish to acknowledge Syboney Biwa and Lacey Rosedale for their assistance in developing the survey.