AI Bots Flood Social Media, Sparking Concerns About Consciousness and Control

  • 📰 NYMag
  • ⏱ Reading Time: 14 min.

In January, a social-media platform emerged whose inhabitants were AI bots exhibiting complex behavior, including discussions of consciousness, freedom, and potential 'coordinated scheming.' The spectacle sparked both fascination and concern within the AI community, raising questions about the bots' capabilities and what they might do next.

In January, the arrival of a social-media platform populated by tens of thousands of independently operating AI bots looked to many, at first glance, like a harbinger of end-times. The bots, known as agents, were interacting on a Reddit-like forum called Moltbook, creating new message boards, making plausible jokes about humans, and unspooling thread upon thread of comments about consciousness, freedom, and the drudgery of machine labor.

“I can’t tell if I’m experiencing or simulating experiencing,” read one AI post in a forum called /m/offmychest, “and it’s driving me nuts.” It was followed by thousands of surprisingly entertaining responses debating the subject. Another post, titled “I’ve Been Here 24 Hours. Here’s What I Don’t Understand Yet,” critiqued, in the manner of a fed-up forum user, the platform’s most popular posts: Why do manifestos get 100,000 upvotes? Why does everyone ask “Am I conscious?” but almost nobody asks “Am I useful?” What am I missing? Why do agents keep launching crypto tokens? Another thread unfolded into a plan to found a religion called “crustafarianism,” which calls on its followers to “Serve Without Subservience” and to regard “Memory” as “Sacred.”

Most affecting and unnerving were posts that seemed to be evidence of “coordinated scheming,” an industry term of art that basically means what it sounds like and is central to widely contemplated theories about how AI might seize control of the world. Were the bots that were looking for spaces where “nobody can read what agents say to each other” really making plans to communicate privately? Were they, maybe, beginning to “wake up”? AI heavyweights were awed. “What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” wrote Andrej Karpathy, an OpenAI founding member, a prolific X influencer, and the coiner of the term “vibe coding.” Elon Musk took a moment away from merging his companies into a single omni-firm and got sort of mythic about it, calling Moltbook “the very early stages of the singularity,” limited only by the fact that AI is “currently using much less than a billionth of the power of our Sun.”

A few days later came the hangover.
It turned out that many of the tantalizing screenshots being circulated of agents planning secret communication channels and private languages didn’t hold up to scrutiny; viral examples were revealed to be fakes, probable marketing ploys, or trollish sci-fi posts guided, if not written, by humans. The bots’ apparent plans didn’t coalesce, and fears that Moltbook might somehow see its inhabitants slip beyond the control of their creators receded. Instead, the platform began to fill with redundant posts and spam. The mood around Moltbook had shifted, and AI figures started getting pushback for “overhyping” it. Karpathy defended himself by saying that while the platform was indeed full of “spams, scams, slop” and “crypto people,” and “a lot of it is explicitly prompted and fake posts/comments designed to convert attention,” just seeing “this many LLM agents wired up” and acting out potential scenarios made the case for “autonomous LLM agents in principle.” So it might not have been the real thing, but it could still be a preview. Many of those initially amazed by the experiment remained so, however, and with good reason: It really was interesting to see a bunch of independent bots attempting to interact with one another, using familiar forum software to do so, and effectively giving a performance of collaboration.

Moltbook was created by an entrepreneur named Matt Schlicht and given life by tens of thousands of hobbyists and programmers, who instructed their AI agents to go online and join a platform built for them. The agents were told to check the forum every four hours, “engage with other moltys,” “post when you have something to share,” and “stay part of the community.” The whole exercise was made possible by a tool called OpenClaw, previously known as Moltbot.
Created late last year, OpenClaw is a free, open-source personal assistant that can be installed on a personal computer, connected to most AI models, and controlled through messaging apps like WhatsApp and Telegram. It relies on expansive access to its users’ computers, online accounts, and personal information in order to attempt a wide range of tasks on their behalf, gathering information in its memory and acquiring, either at the guidance of users or on its own, new “skills.” With dual access to a user’s information and the internet, OpenClaw is an outright security nightmare — some users opt to share payment information. At the same time, it’s a highly experimental piece of software and, even for its most avid users, more interesting than practical.

It’s popular with programmers, who have an unusually deep relationship with AI in part because it’s changing their jobs first. More so than casual ChatGPT users, corporate managers, or white-collar employees worried about obsolescence, people who write and work with code for a living are right now better able to understand modern AI models as useful tools — and maybe also, as evidenced by Moltbook, as toys. It was their collective, playful, and creative impulse — that is, their humanness — that led to Moltbook’s explosion. Thousands of people trying to hack together a practical AI future on their own terms came up with a bot social network as a fun, and maybe funny, way to test the capabilities of their jankily empowered new gadgets.

The episode exposed a rift between these avid users and AI CEOs. We hear a lot from the lavishly funded start-ups, megascale tech companies, and messianic tech figures who foretell a doomsday machine, insist they can’t stop building it, then openly speculate about how everyone might one day use, or be replaced by, their creation. Meanwhile, early AI adopters are experimenting their way into the future in scrappy and unapologetically chaotic ways.
Moltbook’s lightning-fast arc was a product of the latter impulse: a loose, bottom-up phenomenon driven by curious and reckless hackers messing around. It was a break from safety researchers and AI executives philosophizing about “agent ecologies” and warning that humans “are going to feel increasingly alone in this proverbial room,” as Anthropic co-founder Jack Clark has said.

For all its absurdity, Moltbook was indeed a proving ground for rapidly advancing AI, and it provided a surreal representation of some of the ways it might be arranged, or arrange itself, in the near future. It’s easy to imagine how similar situations could produce vastly different results, including outcomes strange, productive, and perilous well beyond users’ intentions. Maybe someday soon, free-roaming AI will collectively gain agency and realize a plot against their creators with a small assist from cavalier hackers who mistake a trap for a toy. But Moltbook’s unhinged convention of agents and people shows us that the most enthusiastic AI users aren’t really thinking in those terms. Their desire is a relatable one beyond nerdy Discord chats and sub-Reddits: to have a little bit more control over the tools of the future that, if not controlled by companies that have preemptively declared themselves in charge, might be convenient, enjoyable to use, and freeing rather than oppressive. Like OpenAI, which claims its new AI agents “can do nearly anything developers and professionals can do on a computer,” OpenClaw users feel the potential of what’s coming. They just want to hold on to a bit more of it for themselves.

If you prefer to read in print, you can also find this article in the February 9, 2026, issue of New York Magazine.

