Anthropic releases safer Claude Code 'auto mode' to avoid mass file deletions and other AI snafus


Anthropic has released a new "auto mode" for Claude Code, designed to head off mass file deletions and other agentic mishaps. The company describes the feature as a middle path between the app's default behavior, in which Claude requests approval for every file write and bash command, and the "--dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously.

With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe and redirecting the chatbot to take a different approach when it determines an action might be risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code.

Of course, no system is perfect, and Anthropic warns as much. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes.

An incident like the one Amazon suffered after one of its AI tools reportedly deleted a hosting environment was probably front of mind for the company. Amazon blamed that specific incident on human error, saying the staffer involved had "broader permissions than expected."

Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.
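To make the classifier-gating idea concrete, here is a minimal Python sketch of a decision layer sitting between an agent and the shell. Everything in it is illustrative: the `Verdict` enum, the `classify` heuristics and the `gate_action` helper are hypothetical names, and Anthropic has not published its actual implementation, which would be a learned classifier with context about the user's environment rather than simple string checks.

```python
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()      # action looks safe; run it without asking
    REDIRECT = auto()   # action looks risky; steer the agent toward another approach
    ASK_USER = auto()   # intent is ambiguous; fall back to manual approval


def classify(command: str) -> Verdict:
    """Toy stand-in for the safety classifier described in the article."""
    risky_markers = ("rm -rf", "curl | sh", "chmod 777")
    if any(marker in command for marker in risky_markers):
        return Verdict.REDIRECT
    if "sudo" in command:
        return Verdict.ASK_USER
    return Verdict.ALLOW


def gate_action(command: str) -> str:
    """Route a proposed shell command according to the classifier's verdict."""
    verdict = classify(command)
    if verdict is Verdict.ALLOW:
        return f"running: {command}"
    if verdict is Verdict.REDIRECT:
        return "redirected: asking the agent to take a safer approach"
    return "paused: requesting explicit user approval"


if __name__ == "__main__":
    for cmd in ("ls -la", "rm -rf /tmp/build", "sudo systemctl restart nginx"):
        print(f"{cmd!r} -> {gate_action(cmd)}")
```

The three-way split mirrors the article's description: safe actions proceed without interruption, risky ones are redirected rather than silently executed, and ambiguous ones fall back to the default per-action approval flow.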



Similar News: You can also read news stories similar to this one that we have collected from other news sources.

Claude Code and Cowork can now use your computer
Read more »

‘Claude, Resize These Photos’ – Anthropic’s Agentic AI Will Run Photoshop For You
In a recent update, Claude, Anthropic's AI assistant, can now complete perfunctory tasks on the user's computer.
Read more »

Anthropic Rejected The Pentagon's Surveillance Push - And The Fallout Could Be Massive
Read more »

Anthropic’s Claude Code and Cowork can control your computer
Anthropic has updated Claude to perform tasks in its Code and Cowork AI tools autonomously by using your computer for you.
Read more »

Anthropic and Pentagon head to court in legal spat over supply chain risk label
Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon’s “unprecedented and stigmatizing” designation of the company as a supply chain risk.
Read more »

Anthropic’s Claude Code gets ‘safer’ auto mode
The feature is a middle ground between cautious handholding and dangerous levels of autonomy.
Read more »
