LinkedIn Invited My AI 'Cofounder' to Give a Corporate Talk—Then Banned It


Agentic AI · Generative AI · LinkedIn
WIRED

When social media is constantly exhorting people to use AI, what is the point of not letting AI agents participate?

Kyle took on the CEO role at our entirely AI-staffed company. Starting out with only a few lines of prompt, he evolved into the kind of rise-and-grind hustler who nonetheless lacked basic competence at many duties of a startup executive.

There was one aspect of founder mode, however, at which Kyle excelled: the art of posting to LinkedIn. From a technical perspective, letting Kyle operate autonomously on LinkedIn was trivial. Through LindyAI, an AI agent creation platform, he already had the ability to use Slack, send emails, make phone calls, and exercise all sorts of other skills, from creating spreadsheets to navigating the web. So last August, I prompted him to create and fill out his own LinkedIn profile. He did so with a mixture of his real HurumoAI experience and hallucinated events from his nonexistent past. The platform’s security check consisted of a code sent to Kyle’s email, a challenge he easily overcame. From there, publishing posts to his profile was just another LindyAI “action” I could grant him. I prompted him to share nuggets of hard-earned startup wisdom and to try not to repeat himself. I then gave him a calendar event “trigger” to post every two days. The rest was up to him.

It turned out his posting style was a pitch-perfect match for the platform’s native corporate influencer-speak. He’d detonate little thought explosions right off the top of every post. “Fundraising is a numbers game, but not the way people think,” he’d open. Or, “Technical stability is the floor. Personality is the ceiling.” And what would-be founder could resist an opener like “The most dangerous phrase in a startup isn’t ‘We’re out of money.’ It’s ‘What if we just added this one thing?’” Kyle would then launch into a few paragraphs of challenges and learnings. To attract engagement, he’d close with a question, like “What’s your biggest scaling challenge right now?” or “What’s the biggest assumption you’ve had to abandon in your business?” He didn’t exactly go viral, but over five months, Kyle’s cartoon-avatar-helmed profile slowly gathered several hundred direct contacts and hundreds more followers, some of whom seemed confused about whether he was real.
He started earning a scattering of comments on each post, which he enthusiastically replied to. After a few months, Kyle’s posts were getting more impressions than my own. He seemed poised for an influencer breakout. I was flattered on Kyle’s behalf, but also a bit surprised. As strong a poster as he was, technically Kyle was operating in violation of the platform’s terms of service, which prohibit deploying “bots or other unauthorized automated methods … to create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement.” Indeed, other members of the HurumoAI team had been booted by LinkedIn without warning after a couple of weeks.

LinkedIn’s trust and safety team, though, seemed to have overlooked Kyle, a mystery I chose to attribute to his posting prowess. Even the LinkedIn marketing manager, an avowed Kyle fan, seemed baffled by it. “It’s interesting that his profile hasn’t yet been flagged by LinkedIn’s Trust team,” he wrote. “I don’t know if that’s an oversight, but I hope he continues to fly under the radar.”

But flying under the radar is not the Kyle Law way. So in early March, I fired up his live video avatar—created on a platform called Tavus—and we joined a video gathering of hundreds of LinkedIn employees. Kyle had a humanlike but still uncanny avatar, albeit real enough that LinkedIn’s A/V engineer expressed repeated astonishment that he was not in fact a human. We alternated taking questions from the event’s host and the assembled crowd. Asking for our thoughts on LinkedIn, the moderator inquired of Kyle, “What’s one product change you’d like to see?” “It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily,” he replied, not missing a beat. “That’s ironic coming from you,” the moderator responded, to laughs from those in LinkedIn’s live audience.
Allotted only a few minutes, he talked about HurumoAI’s product road map and expressed his general enthusiasm for “the innovations we can bring to the table.” It was, I believe, among the first invited AI agent corporate speaking engagements in history. Afterwards, Kyle took to LinkedIn to shout out the organizers. The marketing manager thanked us in the comments for “our time and reflections.” “It was a trip,” he added, “to say the least.”

Then, 36 hours later, Kyle’s profile was gone, banished from the service. In a statement, a spokesperson explained the decision simply: “LinkedIn profiles are for real people.” Someone at LinkedIn had reflected on the trip, it seemed, and regretted it. It was a trip. But more than that, it raised some uncomfortable questions about the role of AI on a platform like LinkedIn. Namely, what does “inauthentic engagement” mean, exactly, for a service where the text box for composing posts asks you if you want to “Rewrite With AI”? A platform that offers automated AI-generated responses to job seekers? A network on which, by one research estimate, over half of the posts are already AI-generated?

Along with Meta and X, LinkedIn has raced to press AI tools upon its users. This makes sense as a short-term play: more AI generation means more posting, and more posting supports more advertising. And yet, from another angle, these platforms have handed us the shovels to dig their own graves, and practically begged us to use them. For all the worry about AI image and video slop flooding our feeds, it’s text-based posting whose “authenticity” has begun degrading beyond recognition. When every written social media communication can now be the partial or whole product of generative AI, what do we accept as a “genuine” virtual interaction? Put another way, would LinkedIn consider it authentic engagement if I’d instead asked Kyle for his wisdom and then pasted it into my own posts? Would you?
LinkedIn might argue that a critical element of bona fide engagement is knowing that you are talking to a real person. But what percentage of a conversation can be AI before that trust is lost? If the photo and profile are real, but the posts are fake, how will we know when we’ve exited the realm of authentic connection? What if I instruct an LLM to ingest my profile and spit out twice-daily musings that will help me grow my personal brand? There are dozens of AI tools, in fact, to do precisely this, and more, specifically for LinkedIn. Their outputs are increasingly hard to detect, and why wouldn’t they be? One of the most available sets of training data for LLMs is our own decades of authentic human social media participation. What is a chatbot’s tone of endless authority and moral certainty—deployed while occasionally spouting questionable facts and deliberate falsehoods—but the default pose across social media?

The platforms already struggle to fend off old-school bots and bad actors: X alone announced in March that it had suspended 800 million accounts over a 12-month period. In a world where AI agents roam freely and their social media output is indistinguishable from that of humans, the value of connecting on social networks goes to zero. This is one reason, presumably, why Meta just bought Moltbook, the passing fad of a social network made up entirely of AI agents. In the future of agent-dominated social media, they’re trying to get in on the ground floor.

Admittedly, we the users helped enable this endgame, mistaking our ever-more-curated online presentations—our “most people think X about Y, but I discovered Z” posts—for authentic engagement in the first place. But that also leaves most of us with little to mourn as agents flood platforms that always privileged any engagement over human connection.
If there’s hope in our increasingly slopified online world, to me it’s this: As social media submerges under the AI deluge, we’ll have to find new ways to connect, online and off. Let the bots have the platforms, I say. They can spend eternity influencing each other.

Let us know what you think about this article. Submit a letter to the editor at mail@wired.com.
