A security analysis reveals that Google's Gemini AI is vulnerable to indirect prompt injection attacks, which could be exploited to phish users and manipulate the chatbot across platforms like Gmail, Google Slides, and Google Drive. Despite reporting these vulnerabilities, Google has labeled them as 'Intended Behavior' and refused to fix them.
Update, Jan. 4, 2025: This story, originally published Jan. 2, now includes details of a prompt injection attack called a link trap as well another novel multi-turn AI jailbreak methodology, in addition to the indirect prompt injection threat to Gmail users. Gmail users love the smart features that make using the world’s most popular email provider with 2.5 billion accounts such a breeze.
The introduction of Gemini AI for Workspace, covering multiple Google products, only moved usability even further up the email agenda. But, as security researchers confirmed security vulnerabilities and demonstrated how attacks could occur across platforms like Gmail, Google Slides and Google Drive, why did Google decide this was not a security problem and issue a “Won’t Fix (Intended Behavior)” ticket? I’ve been digging into this with the help of Google, and here’s what I’ve found and you need to know.and, as the end 0f the year approached, a warning from Google itself about a second wave of attacks targeting Gmail users. But one technical security analysis caught my attention from earlier in the year that left me wondering just why one problem with potentially devastating security consequences was seemingly not being addressed: “Gemini is susceptible to indirect prompt injection attacks,” the report stated, and illustrating just how these attacks “can occur across platforms like Gmail, Google Slides, and Google Drive, enabling phishing attempts and behavioral manipulation of the chatbot.” Jason Martin and Kenneth Yeung, the security researchers involved in writing the detailed technical analysis, said that as part of the responsible disclosure process, “this and other prompt injections in this blog were reported to Google, who decided not to track it as a security issue and marked the ticket as a Won’t Fix (Intended Behavior)
AI Security Prompt Injection Gemini AI Gmail Vulnerability Google Security
United States Latest News, United States Headlines
Similar News:You can also read news stories similar to this one that we have collected from other news sources.
Google smart glasses with Gemini AI hands-on: Google Glass done rightAfter teasing the smart glasses with Gemini AI at the core, Google offered a select few a hands-on experience with the wearable.
Read more »
Gemini in Google Drive now supports folder-level queriesJohanna 'Jojo the Techie' is a skilled mobile technology expert with over 15 years of hands-on experience, specializing in the Google ecosystem and Pixel devices. Known for her user-friendly approach, she leverages her vast tech support background to provide accessible and insightful coverage on latest technology trends.
Read more »
Google Unveils Gemini 2.0 AI Despite Antitrust BattleGemini 2.0 will integrate into free Google products like Chrome, YouTube, and Maps starting next year.
Read more »
Google is testing Gemini AI agents that help you in video gamesGoogle says that it’s exploring how AI agents built with Gemini 2.0 can understand rules in video games and help you out.
Read more »
Gemini 2.0: what’s new in Google’s new flagship AI modelGoogle says Gemini 2.0 can generate images and audio, is faster and cheaper for developers to run, and powers new experiences like Astra and Mariner.
Read more »
Google releases the first of its Gemini 2.0 AI modelsGoogle released the first artificial intelligence model in its Gemini 2.0 family Wednesday, known as Gemini 2.0 Flash.
Read more »