Sam Altman posted a definition for artificial general intelligence (AGI), the goal that AI makers are striving toward. But does this newest definition pass muster?
In today’s column, I take a close look at Sam Altman’s latest indication of what artificial general intelligence will consist of. His recent missive on his blog has been largely overlooked by the media, despite the magnitude and significance of what he had to say.
Given that he is widely perceived as a bellwether of AI progress and attainment, his pointed remarks about AGI are worthy of due attention. You see, all manner of stakeholders associated with AI, and indeed society as a whole, will be tremendously impacted by the attainment of AGI, so we ought to be looking carefully at the expected destination.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities, artificial general intelligence, and artificial superintelligence.

Let me bring you up to speed quickly, since it is entirely relevant here. I’ll focus on AI and AGI for the sake of brevity in this discussion.

The field of AI initially got underway by coining and utilizing the AI moniker, which seemed satisfactory at the time. A general notion was that AI could be defined as a machine or computer-based system that exhibits human intelligence and thus represents an artificial form of intelligence. Gradually, the AI moniker got watered down, chaotically referring to everything from a simple piece of software to the latest, fastest servers. Another interleaving factor was that expert systems or knowledge-based systems gained popularity, and they were typically very narrow in their use of a kind of artificial intelligence.

The AI community decided that perhaps a new way of depicting the vaunted, end-game style of AI was needed. They had grown weary of AI referring to anything and everything, including the proverbial kitchen sink. Instead, those advancing AI would aim for artificial general intelligence. AGI would be the topmost AI. It would not merely be narrow, as with expert systems; it would also need to be general, as in exhibiting the fuller, general aspects of intelligence too.

Here, I will set aside the AI moniker and proceed to focus mainly on the AGI moniker.
If you are nonetheless keenly interested in the AI moniker, you might enjoy my analysis of the legal definitions associated with AI as used in the latest laws, regulations, and legal contracts.

I would bet you are waiting with bated breath to see what AGI is defined as. I’m certainly glad you asked. Let’s concentrate on three definitions. One of them was recently posted by Sam Altman, CEO of OpenAI, another one is firmly implanted in the official OpenAI business charter, and a third is a working definition that I devised and have been routinely using in my columns:

- “AGI is defined as a system that can tackle increasingly complex problems, at human level, in many fields.” (Sam Altman)
- “AGI is defined as highly autonomous systems that outperform humans at most economically valuable work.” (OpenAI charter)
- “AGI is defined as an AI system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans, in all respects.” (my working definition)

I’ll examine the third one, then the second one, and finally the first one. That order will make sense shortly.

The third one in the bulleted list is objectively a firmer or tighter definition of AGI in terms of stretching as far as we can go toward the topmost of AI: “AGI or artificial general intelligence is an AI system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans, in all respects.”

This is tough to achieve because it explicitly says that both narrow and general intelligence must be exhibited, on par with human intelligence, and in all respects of what humans can do. You can’t cheat this definition by coming up with a really good AI-based chess-playing program and then brazenly claiming it is AGI. It wouldn’t qualify as AGI unless it could also do all the other intelligence aspects that humans can do.

The second AGI definition is one that has been posted for quite a while on the official OpenAI company website, on the “OpenAI Charter” statement webpage.
Here’s the exact statement: “OpenAI’s mission is to ensure that artificial general intelligence —by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” The core of it is the definition in the second bullet point: “AGI is defined as highly autonomous systems that outperform humans at most economically valuable work.”

A crucial element of their notion of AGI is that it only has to entail the so-called “most economically valuable work” – which is a mouthful. There are various ways to interpret this condition. Mostly, it seems to suggest that AGI only counts as AGI if it is good enough to replace human labor in some unspecified industries, on some unspecified basis of economic output.

Another loosey-goosey aspect is that this AGI entails “highly autonomous systems” – which presumably is different from saying “fully” autonomous systems. There is a big difference there. A highly autonomous system could still be semi-autonomous rather than fully autonomous.

I think it is fair and reasonable to assert that this AGI is a lot less of a vaunted or revered AGI than the definition I gave in the third bullet, because hitting this so-called AGI as an end goal is a lot less demanding than achieving the one I’ve stated there. The requirements are watered down.

Now, so the trolls don’t go berserk, I am not saying that achieving this AGI isn’t still challenging. It is. Absolutely so. I’m just pointing out that it is not as challenging as the greater aspiration and, in some eyes, would be a letdown in comparison to where we ought to be going. It would be akin to having a truly outstretched goal of reaching the moon but settling instead for getting within spitting distance of the moon.
It just isn’t the same aspiration.

In a posting entitled “Three Observations” on his blog, dated February 10, 2025, Sam Altman provided a definition of AGI that said this: “AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.”

Observe that he rightfully acknowledged in his statement that AGI is a weakly defined term, meaning that the definition of AGI is all over the map and we don’t have a pristine, all-approved version. I extracted the core portion of his AGI definition and provided it above in the first of the three bullet points: “AGI is defined as a system that can tackle increasingly complex problems, at human level, in many fields.”

I doubt there would be much argument that this is even looser than either of the other two definitions I’ve so far covered here. The loosey-goosey is quite pervasive in this version. What do “increasingly complex problems” consist of? When referring to “many fields,” the obvious question is which fields, and how many are enough, for an alleged AGI to be heralded as AGI?

As an aside, you might vaguely be aware that there is a lot of rumbling in the AI industry about the OpenAI and Microsoft contractual conditions, and about when their arrangement will materially change if AGI is reached by OpenAI. Lots of speculation exists and numerous guesses have been widely reported. In any case, I only bring this up because of the footnote that Sam Altman included in his blog post, which said this: “By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term.” He then commented a bit further in the footnote, seemingly staving off potential legal gyrations.
Just thought any AI insiders might get a kick out of that bacon-saving footnote.

If some AI maker devises an AI system that can proficiently solve increasingly complex problems at a human level, and do so in many fields, but not in all fields, and not for all manner of complex problems that humans could potentially solve, would you be willing to exclaim to the high heavens that we have finally and victoriously achieved AGI?

For me, though I would be darned happy and excited at such a grand accomplishment, it still doesn’t seem to match any going-to-the-moon aspirational AGI. What will society as a whole think of emboldened claims that we have reached AGI if a definition of that nature is considered the goalpost?

I think you can also see how this makes any predictions of attaining AGI in the near term a much more likely proposition. Why? Because it is a lesser version of what a revered AGI would consist of, so it is naturally likely to be reachable sooner. In a sense, the goalposts have been moved closer to the kicker, making those extra-point kicks easier and quicker to achieve. The cheese has been moved. The mice won’t have to be quite as clever to find the cheese. Again, you still must give due credit to the mice for finding any cheese. That’s a big win unto itself.

All this talk about cheese reminds me that the moon is supposedly made of cheese. Anyway, we can use a moonshot as yet another metaphor here. Suppose I said to you that the goal was to step on the moon. Before 1969, that was a nearly unimaginable goal. It was a tremendous ask. What if, during the mid-1960s, the goal had been adjusted to say that we merely want to land on the moon? Landing is different from stepping onto the moon. You could seemingly send an unmanned rocket that lands on the moon and proclaim you had succeeded in achieving the mission or goal.

I vote that we stick with the higher goal. In fact, the next level would be to say that we ought to live on the moon.
That would be taking stepping on the moon to an even more heightened aspiration. In my view, that is essentially what ASI is about, but I’ll cover that in an upcoming column.