Many corporate generative AI initiatives are underperforming, leading employees to secretly use personal AI tools for work. This “shadow AI economy” signals unmet demand. Instead of restricting usage, organizations should embrace this trend by empowering employees, fostering knowledge sharing, and strategically integrating AI as a collaborative assistant, as demonstrated by BBVA’s successful approach.
Many corporate gen AI programs fail. They yield clunky tools, slow rollouts, and unimpressive results. Meanwhile, a hidden revolution is taking place inside most large organizations. Employees, frustrated by cumbersome corporate tools or lack of access, are quietly using personal ChatGPT, Claude, and other consumer AI models on the side, often without telling IT or compliance.
An official at a large central bank reported to one of us that when their employees are working on their secure, no-AI, bank-issued PCs, they often have their personal laptops open to the home page of their favorite large language model. The scale of this “shadow AI economy” is large. A recent report found that although only 40% of companies had purchased official LLM subscriptions, employees at more than 90% of the companies surveyed reported regular personal AI use for work tasks. In many organizations, employees use LLMs multiple times daily while their company’s official AI initiatives remain stuck in “pilot purgatory.”

Most companies see this as a risk and respond with restrictions, monitoring, and gatekeeping. But that strategy, based on distrust, is a mistake. The shadow AI economy is not a threat but a symptom: It tells you there is demand and potential for productivity inside your organization. The strategic response is to harness that demand and potential and put it to work for you at scale. In this article, we’ll provide guidance on how to do just that, using the example of BBVA, one of Europe’s largest banks, with a presence in 25 countries, about 125,000 workers, and more than 77 million clients.

What BBVA Did Differently

BBVA’s approach rested on three principles. First, treat AI as an assistant that helps people, not one that replaces them. Second, give employees autonomy to innovate, but with clear responsibility for the results. Third, and most important, build a network of internal “Champions” and expert “Wizards” to spread knowledge and solve problems peer-to-peer. This approach turned individual ingenuity into a company-wide advantage.

The bank moved early. In April 2024, it reached an agreement with OpenAI and deployed ChatGPT Enterprise in a secured, exclusive cloud for the whole company.
The strategic decision was clear: It was more dangerous to have unmanaged, hidden AI usage than to rapidly deploy a managed, secure solution aligned with existing needs. This required unequivocal top management commitment. BBVA’s top leaders compressed the typically arduous processes of risk assessment, legal review, and GDPR compliance into just two months, freeing up a dedicated team of senior managers with full-time focus, authority, and minimal red tape. This rapid response was possible because BBVA had invested heavily in data capabilities since 2017, with its data organization reporting directly to the CEO.

But speed was only the first move. What made BBVA’s rollout distinctive was how it scaled: through a philosophy of “seduction” rather than imposition. Initially, only 3,000 licenses were distributed. These were allocated to business-area leaders, who were explicitly instructed to give them to the most motivated and committed individuals on their teams, not necessarily the most senior, with a “use it or lose it” policy under which low users risked losing their licenses to eager colleagues. This created genuine demand. Active usage was defined not just by frequency but by contribution: Those who created and shared custom GPTs (user-built ChatGPT configurations with set instructions and tools) were prioritized. Demand for licenses quickly exceeded supply, turning the enterprise tool from a mandate into a privilege.

The defining architectural choice was the “Adoption Network,” a structured system that formalized the informal peer-to-peer learning already happening in the shadow AI economy. About 25 Champions and more than 100 Co-Champions handled strategic diffusion and resource allocation. About 200 Wizards served as local experts, providing peer-to-peer support and identifying high-value use cases. Their incentives were partly intrinsic and partly reputational.
The central nervous system of the entire effort was a Community of Practice, a dynamic horizontal platform where people could share knowledge, troubleshoot, and discover innovations from other departments. It became the most active forum in the company’s history.

This networked approach fundamentally shifted the responsibility for innovation. Instead of relying on the core IT and operations functions, the organization empowered the people who understood business processes best to design the solutions. With clear incentives to share, it quickly became a race to the top, as experts from different corners of the world exchanged ideas and best practices they might never have surfaced otherwise.

Training mattered too, but BBVA targeted it strategically. The bank recognized that organizational change often stalls at the management level: Middle and senior managers may be skeptical, risk-averse, or fearful that reorganization threatens their power. So before asking managers to sponsor change, BBVA ran a mandatory five-hour workshop for its 250 most senior managers, including the CEO. This was not a theoretical overview. It demonstrated how gen AI could assist with complex, nuanced management tasks, such as preparing for investor days and giving feedback to collaborators. By showing managers what AI could do for them immediately in their own daily work, the workshop shifted the perception of AI from a risky abstraction to a valuable assistant and made managers feel they were leading the adoption rather than being swept away by it.

To balance control and empowerment, the governance system was built on a core principle: Gen AI is an assistant, not an autonomous agent. Outputs cannot connect directly to automated systems or overwrite core databases without human validation. The employee validates the work, and the work belongs to the employee in charge of the results.
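BBVA’s actual controls are not public beyond this description, but the core rule (no AI output reaches a core system without a named human validating it) can be sketched as a simple approval gate. Everything below, including the class and function names, is a hypothetical illustration, not BBVA’s implementation:

```python
# Minimal sketch of a human-in-the-loop gate, assuming a workflow where
# AI output is staged as a draft and committed only after human sign-off.
from dataclasses import dataclass


@dataclass
class AiDraft:
    content: str            # output produced by the AI assistant
    owner: str              # employee accountable for the result
    approved: bool = False  # set only by explicit human validation

    def approve(self, reviewer: str) -> None:
        # Responsibility stays with a person: the approver becomes the owner.
        self.owner = reviewer
        self.approved = True


def commit_to_core_system(draft: AiDraft) -> str:
    """Refuse any write that has not been validated by a human."""
    if not draft.approved:
        raise PermissionError("AI output requires human validation before commit")
    return f"committed by {draft.owner}"


draft = AiDraft(content="Quarterly summary ...", owner="unassigned")
try:
    commit_to_core_system(draft)       # blocked: no human sign-off yet
except PermissionError:
    pass

draft.approve(reviewer="j.garcia")
result = commit_to_core_system(draft)  # allowed after validation
```

The design point is that the gate sits in front of the system of record, not inside the assistant: the model can draft anything, but only an accountable employee can make it count.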
All employees with access must complete training, pass an exam, and sign a procedure document acknowledging their responsibilities. Because centralized risk assessment for thousands of employee-created GPTs is impossible, BBVA developed an automated “Quality Score” that evaluates GPTs on guardrails, clarity, specificity, ambiguity, and context, an alternative to the formal risk-admission process that would have killed the initiative.

The Results

The BBVA system scaled from 3,000 to 11,000 active users in under a year, with 80% of usage coming through direct chat prompting and the remaining 20% through employee-created GPTs. Over 83% of employees now use the system weekly, averaging 50 prompts per week, above comparable enterprise deployments, according to OpenAI. Users report an average time savings of 2–5 hours per week. More than 4,800 custom GPTs have been created internally, and they are used three times more frequently than the enterprise average.

The deepest adoption occurred in internal audit, where management sponsored 100% license coverage. The audit process is intensive in text analysis, data manipulation, and code review, tasks well suited to AI assistance. Supported by a dedicated Adoption Network team, auditors incorporated AI across all their tasks: Structured GPTs produce extensive audit reports with high quality and consistency, and direct prompting supports document summarization and code review. Ninety-nine percent of the company’s 600 auditors worldwide have become active users and now save an estimated 3–4 hours per auditor per week.

Similar gains appeared across geographies and functions. In Mexico, an insurance-advisory GPT cut query response time by 92% for 4,400 branch managers, saving an estimated 19 hours per user per week. In Peru, an operations assistant for branch personnel reduced support time by 74%. A candidate-report tool for talent acquisition cut report-creation time from 36 minutes to 11 across 30,000 reports per year.
An intra-group invoicing advisor reduced response time by 85% for 1,200 internal accountants. In each case, the tools were built by frontline employees who understood the workflow, not by a central IT team.

The Adoption Network’s power to scale decentralized knowledge showed up in unexpected ways. An entrepreneurial engineer in Mexico, for example, developed a GPT for sentiment analysis and shared it with the Community of Practice. The network organized a workshop, shared best practices, and enabled other departments to shift from closed-ended surveys to open-ended questions, capturing richer customer insights.

Five Moves for Your Organization

The BBVA experience teaches five concrete lessons for other organizations wishing to benefit from their own shadow AI economies.

Make the environment secure quickly. It’s reasonable to assume that your employees are already using AI on the side. Instead of studying potential use cases unit by unit, rapidly build a secure interface for the whole company. This will happen only with a strong mandate from the top; otherwise, bureaucratic process oversight will slow things down and increase the danger of proprietary data leaking. Unregulated usage “in the dark” is more dangerous than even a second-best solution at the whole-company level.

Make access attractive right from the start. Scarcity can help here. Assign licenses competitively, for example, so that access becomes a privilege and a status signal. Then reassign them periodically, based on individual usage and contribution.

Designate power users for each unit. Here the scarce factor is knowledge, not tools. Give power users the means to share best practices and to lead Communities of Practice, which can diffuse their findings. AI adoption is more powerful when collaboration is widespread and learning is collective.

Ensure that managers know what they’re doing. You can’t expect managers to be persuasive in encouraging others to use AI if they’re not good at it themselves. Train them well. At a minimum, they should learn how to write staffing notes, sensitive communications, and KPI reviews with AI help. The workshops should be practical and short, not high-level theory.

Treat the process as one that empowers humans rather than replaces them. Whatever you do, enforce a hard human-in-the-loop rule: A human employee always owns the work, and there are no direct writes to core systems. Internal GPTs need quality scores and guardrails; each should specify its scope and context, include sample tests and data boundaries, and have a named maintainer. This approach is simple, scalable, and non-bureaucratic.

The Path Forward

The AI economy is fundamentally different from previous technical revolutions. Because AI uses natural language and has a low barrier to entry, it enables every member of an organization to become a technical entrepreneur. BBVA understood this early and decided to harness it as a huge internal asset. To do so well, the bank needed to trust its employees and enable its hidden innovators, who would signal exactly where the demand and potential for productivity exist. Its efforts, as documented in this article, provide a useful model for other companies hoping to make similar moves. Above all, they make one lesson abundantly clear: If you want to succeed with enterprise AI, start not by developing a centralized plan but by harnessing the ingenuity of your own people.

Note: All BBVA figures come from BBVA’s internal reporting, as of mid-2025. Time-savings figures are self-reported by users.