How Smarter Chips Could Solve AI’s Energy Problem

Source: ForbesTech

As AI workloads strain power grids and budgets, infrastructure leaders are rethinking chip-level efficiency, lifespan and real-time intelligence.

Data center power demand is projected to surge by 160% by 2030, an increase so severe that hyperscalers like Microsoft are now helping restart dormant nuclear reactors just to meet compute demand. A concrete example: Microsoft's support for restarting the Three Mile Island plant in Pennsylvania to send more power to data centers.

But that's only one side of the energy issue. While the headlines focus on finding more energy, another revolution is building within the chip itself: many experts argue that the real solution to AI's power problem may not be bigger infrastructure, but smarter chips.

Proteantecs, an Israeli startup specializing in chip telemetry, is helping some of the world's largest data centers reduce the power consumption of AI servers by up to 14%, according to the company. "Our technology embeds agents directly onto the silicon chip," said Uzi Baruch, chief strategy officer of Proteantecs, in an interview. "These agents monitor a chip's performance in real time, measuring its 'distance to failure' so systems can dynamically adjust voltage and avoid overprovisioning." Most data centers run chips with wide energy safety margins: built-in buffers meant to protect against wear and tear, power fluctuations and unpredictable workloads. "But those buffers are mostly guesswork," Baruch told me, adding that the company measures them precisely so data centers can reclaim unused power while keeping systems safe.

Industry figures show that an 8-GPU NVIDIA H100 node can draw up to 8.4 kW under heavy AI workloads. If Proteantecs' monitoring reduces that by even 14%, it can cut energy costs by millions of dollars per year and extend chip lifespan by an extra year through reduced thermal stress. The company's technology is already deployed in live AI training and inference environments, with some customers using the real-time telemetry to prevent silent faults and optimize model performance; nobody wants to finish training a multimillion-dollar model only to lose it to an undetected hardware fault.

Arm, a major provider of chip architecture for smartphones, cloud servers and more, is building on its track record in energy-efficient design. "We've constantly evolved the Arm Neoverse platform to meet the growing compute needs of AI workloads," said Eddie Ramirez, VP of go-to-market for Arm's infrastructure line.
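The savings claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below combines the 8.4 kW and 14% figures quoted above with a hypothetical fleet size and electricity price (neither of those last two is from the article):

```python
# Back-of-the-envelope estimate of the claimed savings.
NODE_POWER_KW = 8.4        # 8-GPU NVIDIA H100 node under heavy load (from the article)
SAVINGS_FRACTION = 0.14    # up to 14% power reduction (from the article)
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10       # assumed industrial electricity rate, USD (illustrative)
NODES = 10_000             # assumed fleet size for a large data center (illustrative)

saved_kwh_per_node = NODE_POWER_KW * SAVINGS_FRACTION * HOURS_PER_YEAR
annual_savings_usd = saved_kwh_per_node * PRICE_PER_KWH * NODES

print(f"~{saved_kwh_per_node:,.0f} kWh saved per node per year")
print(f"~${annual_savings_usd:,.0f} saved per year across the fleet")
```

Even with these modest assumptions, the result lands above ten million dollars a year, consistent with the "millions per year" claim.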
Recent Neoverse platforms support advanced vector and numeric features such as SVE2 and the BF16 data format, which help speed up AI model execution while preserving energy efficiency. Neoverse is already in use across cloud platforms including Amazon, Microsoft, Google and Oracle, reflecting a broader trend among hyperscalers to prioritize power efficiency, according to Ramirez. But Arm's approach focuses not just on efficient cores (the parts of a chip that process data while minimizing energy use) but on optimizing the entire computing system, from processing to memory access and data movement. As part of its "Total Compute" strategy, a system-wide approach to balancing performance, power and data flow, the company is helping data centers get more out of the infrastructure they already have. "It's not just about building faster chips," Ramirez said. "It's about helping data centers do more with what they already have." In a landscape where hyperscalers are exploring nuclear options, Arm sees itself as a counterweight. "It's paramount to reduce the power requirements of AI models," Ramirez said. "By maximizing what's already in place, companies can reduce both cost and environmental load."

Cadence, the American company behind electronic design automation software, is using AI to design smarter silicon, reducing inefficiencies during the design phase, even before chips are manufactured at the fab. "At Cadence, we're not just designing for AI; we're using AI to design," Ben Gu, corporate VP for multiphysics system analysis at Cadence, said via email. Gu said the company's Cerebrus AI Studio platform, now adopted by over 1,000 chip projects, uses agentic AI to automate and accelerate SoC (system-on-a-chip) design. According to Cadence, this has reduced delivery times by up to 10X in some cases, while cutting power use and the physical space the chip occupies by as much as 20%.
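BF16 (bfloat16) saves energy largely because it halves the bits moved and multiplied per value while keeping float32's full 8-bit exponent, so range survives even as precision drops. A minimal Python sketch, illustrative only (it truncates, whereas real hardware typically rounds to nearest even):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Reduce a value to bfloat16 precision by keeping only the top
    16 bits of its float32 encoding: sign, 8 exponent bits, and the
    7 highest mantissa bits. Simplified sketch: truncation, not rounding."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]   # float32 -> raw bits
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF_0000))[0]

# Precision drops: pi loses its lower mantissa bits...
print(to_bfloat16(3.14159265))   # -> 3.140625
# ...but float32-scale magnitudes are still representable.
print(to_bfloat16(1e38))         # still ~1e38, no overflow
```

The payoff for AI workloads is that weights and activations stored this way need half the memory bandwidth of float32, which is often where the power goes.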
Cadence also recently unveiled the Millennium M2000 supercomputer, which uses GPU acceleration and multiphysics simulation (modeling interactions such as heat, stress and power) to boost simulation speed by 80X and reduce power consumption by 20%. When it comes to telemetry, Cadence sees insights like Proteantecs' as a crucial feedback loop. "Real-time monitoring informs not just failure prevention, but the entire design lifecycle — from validation to continuous improvement," Gu said, adding that the company isn't stopping there. He laid out a five-level roadmap for autonomous chip design, moving from optimization AI to full agentic workflows. Eventually, these systems could evolve into silicon-level agents that autonomously design, validate and optimize in a continuous loop, with engineers guiding the process rather than driving every step.

Together, Arm, Cadence and Proteantecs represent a new layer of AI infrastructure that is slowly but surely defining how much performance and power efficiency enterprises can extract from the hardware they already own. That matters in an era where energy constraints are becoming economic constraints. Every watt reclaimed through smarter silicon design and real-time monitoring is a watt not bought, or worse, a watt not available. "There doesn't need to be a trade-off between performance and efficiency," said Ramirez, adding that Arm's architecture aims to strike that balance, especially as enterprises face rising energy constraints. The real story, it turns out, may not be about whether we can power the future of AI. It may be about whether we can build AI systems smart enough to power themselves.
