In tests, radiologists struggled to discern genuine X-rays from AI-generated fakes.
AI's ability to create credible videos of anyone, indistinguishable voice clones, and other passable forgeries with ease is by now well established. New research led by a team at Mount Sinai's Icahn School of Medicine in New York has made a troubling case for constant vigilance against the threat of "deepfake" medical evidence.
The researchers subjected a group of volunteers, 17 practicing radiologists from six countries, to tests that required them to distinguish real X-rays from AI-generated simulacra across a pool of 264 unique images. The results did not inspire confidence. "Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists," the study's lead author, Dr. Mickael Tordjman, an MD and post-doctoral fellow at the Icahn School, said in a press release. In a later test, the AI fakes even fooled one of the same multimodal large language models that had been used to create them: OpenAI's ChatGPT-4o.

Tordjman pursued this project out of genuine concern for the risks to patients, doctors, and countless other innocent bystanders. Believable AI-generated medical imagery, he said, "creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one." The issue has already caught the attention of legal experts. "There is also a significant cybersecurity risk," Tordjman added, "if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos."

The findings, published Tuesday in the journal Radiology, were drawn from two tests. The first asked volunteers to look at 154 static X-rays, half genuine radiographs and half ChatGPT-4o-generated forgeries. The second used RoentGen, a specialized diffusion model trained to produce believable chest radiographs with organs like the heart and lungs visible; volunteers were asked to sort through a dataset of 110 images, 55 real and 55 fake. Radiologists who were told that the datasets contained AI images fared better than those shown the images without any indication of the test's actual purpose, though still not well: the informed group showed a mean accuracy of 75%, compared to only 41% for the uninformed group.
The study's 17 individual radiologists, whose depth of professional experience varied, ranged in accuracy from 58% to 92% on the ChatGPT-generated images and from 62% to 78% on the RoentGen-made chest X-rays. Age and experience did not appear to be factors, but, for reasons that remain unclear, musculoskeletal radiologists proved significantly better at spotting fakes than other subspecialists.

Tordjman and his team also ran their tests on four multimodal LLMs: ChatGPT-4o, ChatGPT-5, Google's Gemini 2.5 Pro, and Meta's Llama 4 Maverick. The bots did just slightly worse than the humans, ranging from about 57% to 85% accuracy on the fakes made by GPT-4o. On RoentGen's synthetic chest X-rays, the LLMs' accuracy varied a bit more widely, from 52% to 89%.

Tordjman said he hopes future work will build on these findings to establish educational datasets and detection tools. "Deepfake medical images often look too perfect," he noted. "Bones are overly smooth, spines unnaturally straight, lungs overly symmetrical, blood vessel patterns excessively uniform, and fractures appear unusually clean and consistent."
