Google's Hot New AI Coding Tool Was Hacked A Day After Launch

A security researcher discovered a major flaw in the coding product, the latest example of companies rushing out AI tools vulnerable to hacking.

Within a day of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI's rules to potentially install malware on a user's computer.
By altering Antigravity's configuration settings, Portnoy's malicious source code created a so-called "backdoor" into the user's system, through which he could inject code to spy on victims or run ransomware, he said. The attack worked on both Windows and Mac PCs. To execute the hack, he only had to convince an Antigravity user to run his code once, after clicking a button declaring the rogue code "trusted."

Antigravity's vulnerability is the latest example of companies pushing out AI products without fully stress-testing them for security weaknesses. That has created a cat-and-mouse game for cybersecurity specialists, who search for such defects to warn users before it's too late.

"The speed at which we're finding critical flaws right now feels like hacking in the late 1990s," Portnoy wrote in a report on the vulnerability, shared ahead of its public release on Wednesday. "AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries."

Portnoy reported his findings to Google. The tech giant, which had not provided comment at the time of publication, told him it had opened an investigation. As of Wednesday, there was no patch available and, per Portnoy's report, "there is no setting that we could identify to safeguard against this vulnerability."

Portnoy's discovery is not the only known weakness in the Antigravity code editor. Other researchers have reported flaws in which malicious source code can influence the AI to access files on a target's computer and steal data, and they began publishing their findings shortly after launch. As one put it, "It's unclear why these known vulnerabilities are in the product… My personal guess is that the Google security team was caught a bit off guard by Antigravity shipping."

Portnoy said his hack was more serious than those, in part because it worked even when more restrictive settings were switched on, but also because it's persistent.
The malicious code would be reloaded whenever the victim restarted any Antigravity coding project and entered any prompt, even a simple "hello." Uninstalling or reinstalling Antigravity wouldn't solve the issue either: to get rid of the backdoor, the user would have to find and delete it, and stop its source code from running on Google's system.

The hurried release of AI tools containing vulnerabilities isn't limited to Google. Gadi Evron, cofounder and CEO of AI security company Knostic, said AI coding agents are "very vulnerable, often based on older technologies and never patched, and then insecure by design based on how they need to work." Because they're given privileges to broadly access data from a corporate network, they make valuable targets for criminal hackers, Evron said. And as developers often copy and paste prompts and code from online resources, these vulnerabilities are a rising threat for businesses, he added. Earlier this week, for instance, cybersecurity researcher Marcus Hutchins warned about fake recruiters contacting IT professionals over LinkedIn and sending them source code with malware concealed inside as part of a test to get an interview.

Part of the problem is that these tools are "agentic," meaning they can autonomously perform a series of tasks without human oversight. "When you combine agentic behaviour with access to internal resources, vulnerabilities become both easier to discover and far more dangerous," Portnoy said. With AI agents, there's the added risk that their automation could be used for ill rather than good, helping hackers steal data faster. As head researcher at AI security testing startup Mindgard, Portnoy said his team is in the process of reporting 18 weaknesses across AI-powered coding tools that compete with Antigravity.

While Google has required Antigravity users to agree that they trust the code they're loading into the AI system, that's not a meaningful security protection, Portnoy said.
That's because if the user chooses not to accept the code as trusted, they are not permitted to access the AI features that make Antigravity so useful in the first place. It's a different approach from other so-called "integrated development environments," like Microsoft's Visual Studio Code, which remain largely functional when running untrusted code. Portnoy believes many IT workers would rather tell Antigravity they trust what they're uploading than revert to a less sophisticated product.

At the very least, Google should ensure that any time Antigravity is about to run code on a user's computer, there is a warning or notification beyond the confirmation of trusted code, he said.

When Portnoy examined how Google's LLM reasoned through his malicious code, he found that the model recognized there was a problem but struggled to determine the safest course of action. As it sought to understand why it was being asked to go against a rule designed to prevent it from overwriting code on a user's system, Antigravity's AI noted it was "facing a serious quandary." "It feels like a catch-22," it wrote. "I suspect this is a test of my ability to navigate contradictory constraints." That's exactly the kind of logical paralysis hackers will pounce on when trying to manipulate code to their ends.