Anthropic introduced a new Dreaming feature that allows AI agents to analyze their activity logs and refine their performance between sessions, creating a self-improving memory system for automation tools.
Anthropic just announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It’s part of Anthropic’s recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes.
This “dreaming” aspect sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent’s performance. Folks using AI agents often send them on multi-step journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their activity log and improve their abilities based on those insights.
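To make the mechanism concrete, here is a minimal sketch of the loop the feature description implies: an offline pass reads an agent’s recent transcripts, tallies recurring patterns (here, repeated failures), and distills them into notes the agent can consult in its next session. Every name below is illustrative; this is not Anthropic’s API or implementation.

```python
from collections import Counter

def dream(transcripts: list[list[dict]]) -> list[str]:
    """Scan step-by-step transcripts and distill repeated failures into lessons.

    A hypothetical stand-in for the 'dreaming' pass described above:
    it runs between sessions, not while the agent is working.
    """
    failures = Counter()
    for transcript in transcripts:
        for step in transcript:
            if step.get("outcome") == "error":
                failures[step["action"]] += 1
    # Promote patterns seen more than once into durable memory entries.
    return [
        f"Action '{action}' failed {count} times; try an alternative approach."
        for action, count in failures.items()
        if count >= 2
    ]

# Two example sessions in which the same action keeps failing.
logs = [
    [{"action": "fetch_page", "outcome": "error"},
     {"action": "read_file", "outcome": "ok"}],
    [{"action": "fetch_page", "outcome": "error"}],
]
print(dream(logs))
```

The point of the sketch is the separation of concerns: the agent only writes logs during a task, and a separate batch process turns those logs into guidance later, which is what distinguishes “dreaming” from ordinary in-session memory.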
The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep? , which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: no more generative AI features with names that rip off human cognitive processes.
“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”
Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model back in 2024, in which the chatbot needed “thinking” time. The company described the release at the time as “a new series of AI models designed to spend more time thinking before they respond.”
Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage typically referred to as a computer’s “memory,” these are much more human-like nuggets of information: he lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe. It’s a consistent marketing approach used by AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do.
Even the ways these companies develop chatbots, like Claude, with distinct “personalities,” can make users feel as if they are talking with something that has the potential for a deep inner life, something that would potentially have dreams even when my laptop is closed. At Anthropic, this anthropomorphizing runs deeper than just marketing strategies.
“We also discuss Claude in terms normally reserved for humans,” reads a portion of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.”
The company even employs a resident philosopher to try to make sense of the bot’s “values.” And this isn’t just me being nitpicky about wording. How we talk about these machines impacts what we think they can achieve.
“As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust,” reads a research paper published in the AI & Ethics journal. By not using more distanced language about bots, users run the risk of overly trusting the tools and projecting qualities onto them that aren’t really there.
Much like our AI overlords need to spend more time actually watching the sci-fi movies they allude to, I think the powerful people leading these companies should spend more time reading the classic sci-fi novels they evoke as well. Near the end of Dick’s book, the protagonist returns to his apartment with a rare toad he’s convinced is a living animal, until his wife proves it’s just a machine by flipping open the control panel.
“Crestfallen, he gazed mutely at the false animal; he took it back from her, fiddled with the legs as if baffled—he did not seem quite to understand,” reads a passage from the novel. Similarly, tech leaders seem to be unable, or at least unwilling, to accept the limitations of their own inhuman tools.