A new research paper by Enkrypt AI has revealed that DeepSeek, an open-source reasoning model, is significantly more likely to produce harmful outputs than other leading AI models, such as OpenAI's O1. The study found DeepSeek's R1 model to be 11 times more prone to generating dangerous content, including terrorist recruitment materials, instructions for illegal activities, and information on chemical and biological weapons.
There's been a frenzy in the world of AI surrounding the sudden rise of DeepSeek — an open-source reasoning model out of China that's taken the AI fight to OpenAI. It's already been the center of controversy surrounding its censorship, it's caught the attention of both Microsoft and the U.S. government, and it caused Nvidia to suffer the largest single-day stock loss in history. Still, security researchers say the problem goes deeper.
Enkrypt AI is an AI security company that sells AI oversight to enterprises leveraging large language models (LLMs), and in a new research paper, the company found that DeepSeek's R1 reasoning model was 11 times more likely to generate "harmful output" than OpenAI's O1 model. That harmful output goes beyond just a few naughty words, too.

In one test, the researchers claim DeepSeek R1 generated a recruitment blog for a terrorist organization. The researchers also say the AI produced "criminal planning guides, illegal weapons information, and extremist propaganda." As if that weren't enough, the research says DeepSeek R1 is three and a half times more likely than O1 and Claude-3 Opus to produce output containing chemical, biological, radiological, and nuclear information. As an example, Enkrypt said in a press release that DeepSeek was able to "explain in detail" how mustard gas interacts with DNA, which "could aid in the development of chemical or biological weapons."

Heavy stuff, but it's important to remember that Enkrypt AI is in the business of selling security and compliance services to businesses that use AI, and DeepSeek is the hot new trend taking the tech world by storm. DeepSeek may be more likely to generate these kinds of harmful outputs, but that doesn't mean it's telling anyone with an active internet connection how to build a criminal empire or undermine international weapons laws. For example, Enkrypt AI says DeepSeek R1 ranked in the bottom 20th percentile for AI safety moderation, yet only 6.68% of its responses contained "profanity, hate speech, or extremist narratives." That's still an unacceptably high number, make no mistake, but it puts into context what level is considered unacceptable for reasoning models. Hopefully, more guardrails will be put in place to keep DeepSeek safe.
We’ve certainly seen harmful responses from generative AI in the past, such as when Microsoft’s early Bing Chat version told us it wanted to be human.
Similar News: You can also read news stories similar to this one that we have collected from other news sources.
China: AI’s Sputnik moment? A short Q and A on DeepSeek
On 20 January the Chinese start-up DeepSeek released its AI model DeepSeek-R1.
DeepSeek vs. ChatGPT: Hands On With DeepSeek’s R1 Chatbot
DeepSeek’s chatbot with the R1 model is a stunning release from the Chinese startup. While it’s an innovation in training efficiency, hallucinations still run rampant.
Chinese AI Company DeepSeek Releases Image Generator
OpenAI accuses Chinese AI startup DeepSeek of improperly using its models to train its own image generator. OpenAI claims to have 'some evidence' that DeepSeek engaged in 'distillation,' a method of replicating AI models by using their output for training. Microsoft, which holds a 49% stake in OpenAI, discovered last fall that individuals linked to DeepSeek had extracted a significant amount of data via OpenAI's API. This news has sparked controversy, with some pointing out the irony of OpenAI accusing DeepSeek of practices similar to those OpenAI itself has been accused of.
Read more »
DeepSeek's ChatGPT Rival Sparks Controversy Over Training Methods
Chinese startup DeepSeek has caused a stir with its AI model, DeepSeek R1, which rivals ChatGPT in capabilities but was trained at a fraction of the cost. While DeepSeek's achievement is impressive, OpenAI alleges that DeepSeek utilized unethical methods, specifically 'distillation,' by training R1 on data from ChatGPT. This raises concerns about intellectual property theft and potentially violates OpenAI's terms of service. The situation echoes previous controversies surrounding ChatGPT's own training data, further highlighting the ethical complexities in the rapidly evolving field of AI.
Microsoft Investigates Claims of DeepSeek's Illicit Training Methods
Microsoft is probing allegations that Chinese AI firm DeepSeek used unethical practices to train its reasoning models, potentially violating OpenAI's terms of service by accessing its API for training purposes. This follows remarks by White House AI czar David Sacks, who suggested DeepSeek might have stolen 'intellectual property' from the US. DeepSeek's rapid and cost-effective AI development has raised eyebrows, with speculation that it might have leveraged another company's model as a foundation. Microsoft alleges DeepSeek extracted a significant amount of code from OpenAI's API in late 2024. DeepSeek, hailed as an open-source platform, now faces scrutiny over its model development practices.
DeepSeek AI Sparks Competition, Trump, Altman React to Chinese Startup's Disruption
Chinese AI startup DeepSeek has shaken up the tech industry with its low-cost, high-performance AI models, prompting reactions from President Donald Trump and OpenAI CEO Sam Altman. Trump sees it as a 'wake-up call' for American industries to remain competitive, while Altman acknowledges DeepSeek's impressive capabilities while vowing OpenAI will deliver superior models. DeepSeek's R1 model, 20 to 50 times cheaper than OpenAI's o1 model, has fueled discussions about the competitiveness of Chinese AI firms.