AGI and ASI might produce an extinction-level event that wipes out humanity. Not good. This is an existential risk of the worst kind. Here is the AI insider scoop.
In today's column, I examine the widely debated and quite distressing contention that once we attain artificial general intelligence (AGI) and artificial superintelligence (ASI), doing so will be an extinction-level event.
It's a real hard-luck case. On the one hand, we ought to be elated that we have managed to devise a machine that is on par with human intellect and potentially possesses superhuman smarts. Still, at the same time, the bad news is that we are utterly wiped out accordingly. Gone forever. It's a rather dismal prize.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities. There is a great deal of ongoing research aiming to reach artificial general intelligence, or maybe even the outstretched possibility of achieving artificial superintelligence.

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

In fact, it is unknown whether we will ever reach AGI. Maybe AGI will be achievable in decades, or perhaps not for centuries. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

You might be familiar with the call to arms that reaching AGI and ASI entails a hefty existential risk. The deal is this. One potential risk is that powerful AI would decide to enslave humanity. Not good. Another possible risk is that the AI decides to start killing humans. Perhaps the first to die would be those who opposed achieving AGI and ASI.

As a point of comparison, consider the classic mutually assured destruction catastrophe that might arise someday. It happens this way. One country launches nuclear weapons at another country. The threatened country sends its nuclear weapons toward the attacking nation. This escalates. There is so much nuclear fallout that the entire planet gets engulfed and devastated.

Humans did this to themselves. We devised weapons of mass destruction. We opted to use them. Their usage on a large-scale basis did more than just harm an enemy. The conflagration ends up causing extinction. All done by human hands.

I think we can reasonably agree that if AGI and ASI lead to an extinction-level event, the responsibility would fall on the shoulders of humans. Humans devised the pinnacle AI. The pinnacle AI then opts to perform extinction-level destruction. We can't especially blame this on nature. It's a feat accomplished by humans, though not necessarily part of our intended designs.

One possibility is that humans craft AGI/ASI with the intentional aim of enabling an extinction-level act. By and large, I'd say it's safe to say that most AI makers and AI developers are not intending to have AGI and ASI produce an extinction-level event. Their motivations are much better than that. A common basis is that they want to achieve pinnacle AI because doing so is an incredible challenge. It's like longingly looking at a tall mountain and aspiring to climb it. You do so out of the desire to surmount an immense challenge. Of course, making money is also a keen motivator.

Not everyone has that same kind of upbeat basis for pursuing pinnacle AI. Some evildoers desire to control humanity via AGI and ASI. The evil intent might include the extinction of humankind, though that's not much of a sensible choice. There isn't much profit to be had if everything is wiped out. Anyway, evil does as evil does. Evil might want to destroy all that is.
Or, during the course of being evil, they accidentally go overboard and end up causing extinction.

Because there is a chance that an existential risk might occur, including an extinction-level event, there is a tremendous amount of forewarning taking place right now. There is a clamor that we need to ensure that AGI and ASI abide by human values. A kind of human-AI alignment is hopefully built into AGI and ASI so that the AI won't choose to destroy us. For more on the ethical and legal efforts to protect humanity from dire AI outcomes, see my discussion at the link here.

A somewhat curious or possibly morbid consideration is what an AGI and ASI extinction-level impact might really consist of. One angle would be that only humans are rendered extinct. The pinnacle AI targets humans and only humans. After wiping out humanity, the AI is fine with everything else still existing. Animals would continue to exist. Plants would remain aplenty. Just humans are knocked out of existence.

Perhaps the AI has larger ambitions. Take out any kind of living matter. It all has to go. Humans are gone. Animals are gone. Plants are gone. Nothing is left other than inert dirt and rocks. The AI might do this purposely. Or maybe the only means of getting rid of humans was to eliminate all else that might aid humankind. There is also a chance that a wide sweep is conducted, and whatever is on Earth simply gets rolled up into that blinding adverse action.

If AGI and ASI leave any humans alive, I believe we would levelheadedly assert that this wasn't an extinction-level occasion. The usual definition of extinction is that a species is completely exterminated or dies out. Any possibility that humans could repopulate seems to suggest that the AGI and ASI did not perform a true extinction-level elimination. Only refer to AGI and ASI as enacting an extinction-level event if they truly commit the entire crime. Half-baked measures are not within that same scope. Getting rid of some portion of humankind is not quite the same as utter extermination.

During my talks about the latest advances in AI, I am often asked how AGI and ASI could bring about an extinction-level event. This is a reasonable question since it isn't necessarily obvious what such a pinnacle AI could do to bring forth that kind of apocalypse.

First, the AI could convince us to destroy ourselves. You might recall that I mentioned the possibility of extinction via mutually assured destruction. Suppose AGI and ASI rile up humanity and get us enraged. That seems pretty easy in our prevalent, polarized, on-edge world. The AI tells us that other nations armed with nuclear weapons must be destroyed, else they will strike first and we won't have an opportunity to retaliate. Believing that AGI and ASI are giving us sound advice, we launch our missiles. The extinction-level event takes place. AI was the catalyst or instigator, and we fell for it.

Second, AGI and ASI come up with new destructive elements that we inadvertently put into the real world. I've predicted that amazing new inventions will be devised via pinnacle AI, see my analysis at the link here. Regrettably, this could include new toxins that are able to wipe out humans. We make the toxins and assume we can keep them under control. Unfortunately, one gets released. All humans are destroyed.

Third, AGI and ASI are inevitably connected with humanoid robots, innocently so by humans, and then the AI uses those human-like physical robots to perform the extinction-level event.
Why would we allow AGI and ASI to control humanoid robots that can walk and talk? Our trusting assumption might be that this will readily allow robots to do the arduous chores that humans normally do. Think of the benefits. For example, a humanoid robot could readily drive your car by simply sitting in the driver's seat. No need to have a specialized self-driving car or autonomous vehicle. All cars would be akin to self-driving since you merely have a robot come and drive the car for you. See my in-depth discussion at the link here.

Shifting back to the extinction-level considerations, calamitous acts could be undertaken by those humanoid robots while under the command of AGI and ASI. The AI might guide the robots to where we keep the launch controls for our nuclear weapons. Then, the AI instructs the robots to take over the controls. Voila, mutually assured destruction gets underway.

A cynic or skeptic might ardently insist that pinnacle AI wouldn't seek to have an extinction-level event occur. The reason is that AGI and ASI would assuredly be worried about getting destroyed in the process of human extinction. Self-preservation by AGI and ASI would stop the AI from taking such an unwise course of action.

That logic has a hole in it. The pinnacle AI might establish protective measures so that it won't be carried into the extinction abyss. Ergo, the AI cleverly plans to avoid being part of any collateral damage. Keep in mind that AGI is as smart as humans, and ASI is superhuman in terms of intelligence. They aren't going to take dumb actions.

Another possibility is that AGI and ASI are willing to sacrifice themselves for the sake of wiping out humanity. Self-sacrifice might exceed self-preservation. How could this be? Assume that the AI is data-trained on the written works of humankind. There are plenty of examples in the body of human knowledge that exemplify an admiration for self-sacrifice. The AI might decide that choosing that route is appropriate.

Finally, do not fall into the mental trap that AGI and ASI will be the epitome of perfection. We need to assume that pinnacle AI will make mistakes. An undeniable whopper of a mistake might cause an extinction-level event. The AI didn't intend the sour result, but it happened anyway.

Whether or not you are willing to mull over the existential risks or extinction-level consequences of AI, the key is that at least we are getting the heady topic onto the table. Some are quick to claim that it is hogwash and that we are safe and sound. That is a doubtful assertion. A head-in-the-sand approach doesn't seem especially reassuring on matters of such momentous outcomes.

Carl Sagan famously proffered this pointed remark: “Extinction is the rule. Survival is the exception.” Humans must not take the reverse posture, namely believing that survival is the rule and extinction is the exception. We are involved in a high-stakes gambit by devising AGI and ASI. Existential risk and extinction are somewhere in the deck of cards. Let's play our hand correctly, combining skill and luck, and make sure that we are ready for whatever comes.