OpenAI and Anthropic spent more on lobbying in 2025 than ever. They're pushing for friendly treatment and less regulation amid rising tensions—at least for Anthropic—with the Trump administration.

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei avoid holding hands at the AI Impact Summit in New Delhi on February 19, 2026.
Last month, Anthropic CEO Dario Amodei published a sprawling essay arguing that AI data center proliferation is increasingly tethering the financial and technological interests of major AI labs to the government's political ones. He lamented what he described as tech's reluctance to challenge the government, as well as the government's support for "extreme anti-regulatory policies on AI." His prescription: put policy over politics.

But a dispute reported by Axios—between Anthropic and the U.S. Department of Defense over a $200 million contract to develop AI for national security across warfighting and enterprise use cases—suggests the separation for which Amodei is advocating may be difficult to achieve in President Trump's coin-operated administration. And the Anthropic CEO obviously knows it, which is why his company is stepping up lobbying and political donations just like its rivals.

Anthropic and OpenAI spent $3.13 million and $2.99 million, respectively, on direct federal lobbying in 2025: more than ever, according to regulatory disclosures. That's on top of approximately $300,000 per company to lobby in California.

Last week, Anthropic announced a $20 million donation to Public First Action, a political organization advocating for more AI regulation. The company touted the effort as non-partisan, even as many of the policies it supports seem to be at odds with those of Trump, AI czar David Sacks and the industry's general anti-regulation bent. OpenAI declined to comment. Anthropic declined to comment on specific lobbying efforts.

Perhaps the most visible break between OpenAI's and Anthropic's policy efforts came over California's AI safety bill, SB 53, which went into effect in January. The law requires large AI model makers to create rules and guardrails for assessing the safety of new models, then to self-report how they addressed them before launching those models—or face fines. Both companies lobbied on the bill.
OpenAI reportedly lobbied against the bill, while Anthropic publicly endorsed it. "It's positioning yourself as advocating for safety or being safety-oriented … it was more of a publicity move," says Kyle Qi of Llama Ventures, which holds an indirect stake in Anthropic. Michael Kleinman, head of U.S. policy at the pro-AI-regulation Future of Life Institute, is even more direct: "Until we see the companies actually support meaningful legislation, then what they say about their desire for regulation is empty talk … the industry as a whole is hellbent on avoiding regulation."

Peel back the layers and, at the federal level, the overlap is clearer than the division. For both Anthropic and OpenAI, 2025 lobbying priorities included national security and AI infrastructure. Federal agencies are increasingly using AI systems: in August, OpenAI and Anthropic announced agreements for government agencies to use their models for $1, and Elon Musk's xAI announced a similar partnership for $0.42 in September.

The pitch to Washington is straightforward: faster scale strengthens U.S. competitiveness. In an October letter to the White House Office of Science and Technology Policy, OpenAI chief global affairs officer Chris Lehane wrote that "the national security imperative to lead the world in AI also presents a once-in-a-century opportunity to strengthen our economy." He urged the government to ensure frontier AI systems protect U.S. national security interests "including through federal agency adoption."

Anthropic's filings explicitly reference export controls and the GAIN AI Act. Introduced in October, the bill would further restrict sales of advanced AI chips to adversaries such as China and Russia by granting U.S. customers a right of first refusal before export. The issue has renewed urgency: Nvidia is now permitted to sell its advanced H200 chips to Chinese companies after years of negotiations involving U.S. and Chinese officials.

Then there is the issue of buildout. Developing and deploying frontier AI requires vast data centers.
Anthropic, OpenAI and legacy incumbents like Google and Meta are racing to expand capacity. They argue that permitting complexity, power bottlenecks, construction delays and local opposition are slowing projects; more than half of data center developments in 2025 were delayed at least three months, according to JLL. The financial commitments are enormous: OpenAI has committed to spending some $1.4 trillion on data centers over the next eight years.

Both companies are lobbying accordingly. Anthropic lobbied on a July executive order aimed at accelerating federal permitting of data center infrastructure. In its October letter, OpenAI called for tax credits and other subsidies, along with streamlined energy and environmental permitting to bring facilities online faster. Anthropic has supported similar policies and recently announced it would cover electricity costs tied to connecting new data centers to the grid.

Other bills and issues referenced in the companies' disclosures include the CREATE AI Act, which aims to make AI technology accessible to every American; AI election interference; copyright; an executive order on preventing "woke AI" in the federal government; an AI moratorium; and broader AI safety and governance regulations.

It's still early days, though: legislation that meaningfully changes the course of AI development and deployment in the U.S. has yet to be passed. One key effort, to make federal AI policy take precedence over state laws and thus make it nearly impossible for states to pass their own AI regulation, ended up as a December executive order rather than in laws passed by Congress.
"Washington is woefully behind on AI policy, leaving policymakers dangerously dependent on tech companies for information about the trajectory and impact of emerging technologies that will affect all of us," says Matt Lerner, managing director of research for Founders Pledge, a nonprofit that pools founders' giving and has directed philanthropic donations to several AI policy and safety nonprofits.

AI regulation is lining up as a midterm fight, with candidates and deep-pocketed philanthropies spending millions to make their positions heard. But for now, the field belongs to the builders. The companies writing the AI model specifications are also writing the biggest checks in Washington—arguing that speed is patriotism, scale is security and that their commercial success is good for the country. Until Congress proves otherwise, they remain both the subject of the rules and the loudest voices shaping them.
