A researcher invented a fictional eye condition, "bixonimania," published deliberately fraudulent papers about it online, and then watched as major AI chatbots began recommending it as a real illness to people seeking medical advice. The experiment exposed how easily these models absorb and spread medical misinformation, even when the fabricated nature of the source material is blatant.
A Swedish medical researcher has exposed a troubling vulnerability in artificial intelligence systems by creating a fictional disease that AI chatbots subsequently presented to users as legitimate medical information. The experiment, conducted by Almira Osmanovic Thunström of the University of Gothenburg, revealed how easily large language models can absorb and spread medical misinformation.

Bixonimania, a completely fabricated eye condition supposedly caused by excessive blue-light exposure from screens, was born on March 15, 2024, when Osmanovic Thunström posted two blog entries about it on Medium. She followed up with two preprint papers on the academic social network SciProfiles in late April and early May of that year. The lead author listed on the papers was Lazljiv Izgubljenovic, a fictional researcher whose photograph was generated with AI.

Osmanovic Thunström deliberately filled the fake papers with obvious red flags to alert readers that the work was fraudulent. Izgubljenovic was listed as working at the nonexistent Asteria Horizon University in the equally fictional Nova City, California. The acknowledgements thanked Professor Maria Bohm at The Starfleet Academy and credited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring. The papers even contained explicit statements declaring that the entire work was made up and that all fifty individuals recruited for the study were fabricated.

Despite these glaring warning signs, major AI chatbots quickly began presenting bixonimania as a real medical condition. By April 13, 2024, Microsoft Bing's Copilot was describing it as an intriguing and relatively rare condition. On the same day, Google's Gemini informed users that bixonimania was caused by excessive blue-light exposure and recommended visiting an ophthalmologist.
Later that month, both Perplexity AI and OpenAI's ChatGPT were providing information about the condition's prevalence and helping users determine whether their symptoms matched the fictional illness.

Osmanovic Thunström explained her motivation for the experiment: "I wanted to see if I can create a medical condition that did not exist in the database." She chose the name bixonimania precisely because it sounded ridiculous; no legitimate eye condition would be called a mania, which is a psychiatric term. She wanted to make it abundantly clear to any medical professional that the condition was fabricated.

The problem extended beyond AI chatbots regurgitating false information. Some researchers apparently cited the fake papers in peer-reviewed literature without reading the underlying sources. A study published in a Springer Nature journal cited one of the fraudulent preprints and stated that bixonimania was an emerging form of periorbital melanosis linked to blue-light exposure. The journal retracted the paper on March 30, 2026, after being contacted about the issue, noting that the presence of three irrelevant references, including one to a fictitious disease, undermined confidence in the work's accuracy.

Alex Ruani, a doctoral researcher in health misinformation at University College London, called the experiment a masterclass in how misinformation operates. "It looks funny, but hold on, we have a problem here," Ruani said, emphasizing that while the details might seem silly, the underlying issue is serious. "If the scientific process itself and the systems that support that process are skilled, and they aren't capturing and filtering out chunks like these, we're doomed," Ruani added.

The responses from AI companies varied when confronted with their systems' failures.
An OpenAI spokesperson said the models powering current versions of ChatGPT are significantly better at providing safe and accurate medical information, claiming that studies conducted before GPT-5 reflect capabilities users would not encounter today. A Google spokesperson acknowledged the limitations of generative AI and noted that for sensitive matters such as medical advice, Gemini recommends users consult qualified professionals. Microsoft did not respond to requests for comment.

Before conducting the experiment, Osmanovic Thunström consulted an ethics adviser and deliberately chose a comparatively low-stakes condition to limit potential harm. David Sundemo, a physician conducting AI healthcare research at the University of Gothenburg who served as the ethics adviser, acknowledged the work was controversial but valuable. "From my perspective, it's worth the ethical cost of planting false information in this regard," Sundemo said.