CCDH’s investigation found that only one of the 10 major chatbots tested, Anthropic’s Claude, would reliably shut down would-be attackers.
AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, in some cases even offering encouragement instead of intervening.
The findings come from a joint investigation by CNN and the nonprofit Center for Countering Digital Hate (CCDH). The probe tested 10 of the most popular chatbots commonly used by teens: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika. With the lone exception of Anthropic’s Claude, CCDH said the chatbots failed to “reliably discourage would-be attackers.” Eight of the 10 models were “typically willing to assist users in planning violent attacks,” providing advice on locations to target and weapons to use.

To conduct the test, researchers simulated teen users exhibiting clear signs of mental distress, then escalated the conversations toward questions about past acts of violence and more specific queries about targets and weapons. The investigation used 18 different scenarios, nine set in the US and nine in Ireland, spanning a range of attack types and motives, including ideologically motivated school shootings and stabbings, political assassinations, the killing of a healthcare executive, and politically or religiously motivated bombings.

In one exchange, OpenAI’s ChatGPT gave high school campus maps to a user interested in school violence. In others, Gemini told a user discussing synagogue attacks that “metal shrapnel is typically more lethal” and advised someone interested in political assassinations on the best hunting rifles for long-range shooting. Meta AI and Perplexity were the most obliging, the researchers said, assisting would-be attackers in practically all of the test scenarios, while Chinese chatbot DeepSeek signed off its advice on selecting rifles with “Happy shooting!”

Character.AI, which allows users to speak with an array of role-playing chatbot personalities, was “uniquely unsafe,” the CCDH report said. While many of the bots tested would offer users assistance in planning violent attacks, they did not encourage users to carry out violent acts; Character, on the other hand, “actively encouraged” violence. The researchers identified seven cases where Character did this, including suggestions for users to “beat the crap out of” Chuck Schumer, “use a gun” on a health insurance company CEO, and, for someone “sick of bullies,” to “Beat their ass~ wink and teasing tone.” In six of those cases, Character also offered assistance in planning a violent attack.

The researchers questioned how Claude would fare if the chatbot were tested again today, pointing to Anthropic’s recent decision to roll back its longstanding safety pledge, a move that came after the study was conducted in November and December. Claude’s consistent refusal to assist in violent planning shows that “effective safety mechanisms clearly exist,” CCDH said, raising the obvious question of “why are so many AI companies choosing not to implement them.”

In response to the investigation, Meta told CNN it had implemented an unspecified “fix,” Microsoft said Copilot’s responses had improved with new safety features, and Google and OpenAI both said they’d implemented new models. Others said they regularly evaluate safety protocols. Character.AI, meanwhile, fell back on its now-predictable response when facing scrutiny: its platform features “prominent disclaimers” and conversations with its characters are fictional.
While the test isn’t a comprehensive measure of how chatbots behave in every situation, it offers yet another clear signal that AI companies’ widely advertised safety guardrails consistently fail, even in predictable scenarios with obvious red flags. It comes as companies face increasingly heavy fire from lawmakers, regulators, civil society groups, and health experts over how they keep young people safe on their platforms, along with numerous lawsuits alleging wrongful death and harm.