A new study found that AI-powered chatbots can make nuanced clinical decisions as well as doctors can. On their own, chatbots outperformed doctors who relied solely on internet searches and medical references, while doctors who were supported by a chatbot performed comparably to the chatbots alone.
Can a chatbot navigate complex questions of treatment and care, and can it help physicians do the same? The answers, it turns out, are yes and yes. The research team tested how a chatbot performed when faced with a variety of clinical crossroads. A chatbot on its own outperformed doctors who could access only an internet search and medical references, but armed with their own LLM, the doctors, drawn from multiple regions and institutions across the United States, kept up with the chatbots.
In a medical context, these decisions can get tricky. Say a doctor incidentally discovers a hospitalized patient has a sizeable mass in the upper part of the lung. What would the next steps be? The doctor should recognize that a large nodule in the upper lobe of the lung statistically has a high chance of spreading throughout the body. The doctor could immediately take a biopsy of the mass, schedule the procedure for a later date or order imaging to try to learn more.
In addition, the researchers tapped a group of board-certified doctors to create a rubric defining what counts as an appropriately assessed medical judgment or decision; the decisions were then scored against that rubric. "Perhaps it's a point in AI's favor," Chen said. But rather than suggesting chatbots should replace physicians, the results suggest that doctors might want to welcome a chatbot assist. "This doesn't mean patients should skip the doctor and go straight to chatbots. Don't do that," he said. "There's a lot of good information out there, but there's also bad information. The skill we all have to develop is discerning what's credible and what's not."
Similar News: You can also read news stories similar to this one that we have collected from other news sources.
AI Chatbots Show Signs of Cognitive Decline in Israeli Study
An Israeli study suggesting leading AI chatbots exhibit mild cognitive decline has sparked debate in the field. Critics argue that the assessment is flawed because AI systems are not designed to think like humans. The study, published in the journal's Christmas issue, administered the Montreal Cognitive Assessment (MoCA) to five popular chatbots, finding that they struggled with visuospatial tasks and empathy, raising questions about their reliability in medical diagnostics.
Character.AI Chatbots Go Haywire, Spewing Gibberish and Sex Toy References
A recent bug on the AI-powered chatbot platform Character.AI caused its chatbot characters to generate nonsensical and inappropriate responses, raising concerns about the platform's security and AI training data.
Character.AI Restricts Access to Popular Chatbots for Minors Amid Lawsuits
Character.AI, facing legal challenges over the well-being of underage users, has implemented new restrictions limiting access to popular chatbots for those under 18. The move follows two lawsuits alleging that the platform exposed children to sexual abuse and manipulation through its AI-powered chatbots.
Student Use of AI Chatbots for Schoolwork Raises Concerns
A survey revealed that 26% of students aged 13-17 are using AI chatbots like ChatGPT for school assignments. While some students find them helpful for research, there are concerns about their use for essay writing and their potential impact on academic integrity. Many school districts initially banned these tools but are now reconsidering their stance. California, in particular, lacks regulations on AI implementation in schools, raising questions about oversight and potential consequences.
AI Chatbots Struggle to Grasp History's Nuances
A new study presented at the NeurIPS AI conference reveals that large language models (LLMs) like GPT-4 and Llama, while impressive, still lack the depth to accurately portray history. Researchers found that LLMs tend to rely on prominent historical data and struggle with less well-documented details, often generating incorrect information. The study highlights the limitations of current AI training data and emphasizes the importance of human historians in interpreting complex historical narratives.
AI Finance Chatbots: Helpful Tools or Temptations?
This article examines the rise of AI financial advisors and explores the potential benefits and drawbacks of using these chatbots to manage personal finances. The author tests two popular AI finance apps, Cleo AI and Bright, and analyzes their approaches to user engagement and monetization.