Josh has worked a freelance writer for the past ten years, writing news and features focusing on the gaming, science, and tech industries. He has covered big events like E3, CES, and a slew of other smaller press events oriented around the latest consumer technology and gadgets.
Aug. 7, 2025 5:22 pm ESTNew findings from a group of researchers at the Black Hat hacker conference in Las Vegas has revealed that it only takes one"poisoned" document to gain access to private data using ChatGPT that has been connected to outside services.
One of the ways that OpenAI has made ChatGPT even more useful for its userbase is by allowing you to connect it to various outside services, like Google Drive, GitHub, and more. But connecting ChatGPT to these private data storage solutions could actually put your data at risk of being exposed, the new research shows., was designed by researchers Michael Bargury and Tamir Ishay Sharbat. When utilized, it shows that indirect prompt injection is possible through a single document that has been inlaid with the right instructions. When used, this kind of attack could give bad actors access to developer secrets like API keys and more. For instance, in this case, the researchers included an invisible prompt injection payload in a document before it was uploaded to ChatGPT. When an image in the document is rendered by ChatGPT, a request is automatically sent to the attacker's server using the invisible prompt. Just like that, the data has been stolen, and the victim is none the wiser.These indirect prompt attacks are part of a new style of hack that has been popping up on the AI security scene more and more in recent months. In fact, other research released this week also shows thatusing an infected calendar invite. These indirect prompt attacks are just one way that AI has proven susceptible to the whims of bad actors. And the concerns surrounding these types of attacks are only growing, especially as people like the Godfather of AI say thatOne of the reasons this type of attack is so dangerous is that the user doesn't need to do anything beyond connecting ChatGPT to their Google Drive or GitHub account. From there, if a"poisoned" document with indirect prompt instructions embedded in it is added to their files, it could give bad actors access to the data stored in their account.in action to get an idea of just how simple it is and how quickly it works. Of course, connecting AI to your external accounts can be extremely helpful, and that's one way that developers make use of various AI systems, as it allows them to connect AI to their existing databases without needing to move their code over to any additional tools. But, as the researchers notes, giving AI more power can open you up to even more risk.
United States Latest News, United States Headlines
Similar News:You can also read news stories similar to this one that we have collected from other news sources.
Federal push to safeguard farmlands signals new era of food security as national securityThe federal government is getting serious about protecting America’s farmlands from foreign adversaries.
Read more »
Federal push to safeguard farmlands signals new era of food security as national securityThe federal government is getting serious about protecting America’s farmlands from foreign adversaries.
Read more »
Federal push to safeguard farmlands signals new era of food security as national securityThe federal government is getting serious about protecting America’s farmlands from foreign adversaries.
Read more »
Hackers Used Simple Password to Access McDonald’s AI Hiring Bot Applicant DataSecurity researchers gained access to years of applicant data using an absurdly insecure password.
Read more »
Security Researchers Found A Serious Vulnerability In A Popular Vibe Coding PlatformJosh has worked a freelance writer for the past ten years, writing news and features focusing on the gaming, science, and tech industries. He has covered big events like E3, CES, and a slew of other smaller press events oriented around the latest consumer technology and gadgets.
Read more »
Researchers hacked Google Gemini to take control of a smart homeFind the latest technology news and expert tech product reviews. Learn about the latest gadgets and consumer tech products for entertainment, gaming, lifestyle and more.
Read more »
