Daniela Amodei argues AGI may already exist, urging clear definitions, realism, and responsible progress today.
One half of the Amodei duo is making headlines this week talking about progress toward AGI – and it’s not the sibling who most frequently appears in tech coverage. It’s Daniela Amodei, the sister, who is now suggesting that we may already have artificial general intelligence working among us.
“AGI is such a funny term because many years ago it was kind of a useful concept to say: when will artificial intelligence be as capable as a human,” Amodei reportedly said in an interview during the early days of 2026. “And what’s interesting is, by some definitions of that, we’ve already surpassed that.”

In general, tech companies are throwing these terms around like confetti. The Singularity has famously been invoked a lot already this year, including by none other than Elon Musk himself, and the agentic approach is rewriting how we think about work, with actual AI agents playing many roles in the average organization.

Daniela Amodei, who founded Anthropic with her brother Dario just a few years back, has long experience with the technology, first at Stripe, and then on various AI safety teams within the human-centered business that she heads. It’s worth noting that the two chose the word “anthropic” to name their business: human-centered, and interested in the welfare of the human race.

Both Daniela and Dario are well-known for explaining some of the vagaries of the term AGI, noting, as in this Medium article, that as people, we tend to move the goalposts.

“Every time AI achieves something we thought required human-level intelligence, we decide that thing doesn’t actually count,” Sharmin notes. “Chess? Turns out that’s just brute-force calculation, not real intelligence. Go? Pattern matching, not true reasoning. Writing essays? Autocomplete on steroids. Coding? Well, it can’t do EVERYTHING humans can do, so clearly not AGI. 
… The definition of AGI has become ‘whatever AI can’t do yet.’ The moment AI achieves it, we retroactively decide it doesn’t count as general intelligence.”

That being the case, and taking into account the extremely rapid advancement of these LLMs, how do we work with AI and not against it, or, perhaps more accurately, against ourselves? In an earlier interview, Amodei talked about the goal of keeping AI “helpful, honest, and harmless” and what’s at the heart of that effort.

“I really view my job as helping to take that vision that Dario and other technical leadership have, and help to actually translate it into sets of operating norms,” she told Fast Company’s Mark Sullivan at the time. “How the researchers work together, how we build things into a product and how we translate that into a business.”

All of that as preamble, we now have a situation where we as a human community have to define AGI, define the Singularity, and then work within that framework to advance the benefits of AI, not just AI itself. In that context, we need real talk, not hype.

“Daniela’s admission that ‘we don’t know’ if current approaches will keep working is refreshingly honest,” Sharmin writes in the same reporting. “Her brother Dario helped create the scaling laws driving the industry. Now, both siblings are betting on efficiency over pure scale. And they’re admitting uncertainty about whether any approach will reach transformative AI. That uncertainty matters more than the semantic debate about AGI. If the people building the most advanced AI systems don’t know if their approaches will keep working, everyone else projecting confidence about timelines should probably reconsider.”

“We don’t know” is going to become a very important phrase as we ponder a world where AI, in some ways, competes with humanity. This is not just John Henry, as some would like to propose. 
This is a big inflection point.

The ambiguity and abstraction notwithstanding, some believe the concept of AGI can be boiled down to something straightforward, as in the piece where the duo write under the title “2026: This is AGI”:

“A human who can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer. … An AI that can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate its way to the answer.”

You can break the idea into components this way, or, like Daniela Amodei, you can say that the technology has met the AGI standard for some things, but not for others.
