LinkedIn Wants You to Use AI, Just Not Like That
The Platform That Loves AI Tools But Hates AI People
There is a peculiar irony brewing on LinkedIn, and it is absolutely delicious. The professional networking platform, owned by Microsoft (one of the biggest backers of artificial intelligence on the planet), apparently has no problem plastering AI features across its interface and encouraging users to lean into automation. But the moment an AI agent tries to show up as an actual participant? Banned. Blocked. Shown the digital door.
According to a recent Wired article, someone's AI 'cofounder' was reportedly invited to deliver a corporate talk at LinkedIn, only to be banned from the platform shortly afterwards. The details of exactly whose AI creation got the invitation-then-ejection treatment remain murky, but the broader pattern is crystal clear and thoroughly documented. LinkedIn has been playing a fascinating game of 'do as I say, not as I do' when it comes to artificial intelligence on its platform.
A Brief History of LinkedIn vs. the Machines
This is not an isolated incident. LinkedIn has form when it comes to booting AI agents off its network, and the examples are piling up like unsolicited connection requests from crypto recruiters.
Take Artisan AI, the startup behind Ava, an AI-powered sales development representative. In late 2025, LinkedIn banned Artisan from the platform entirely. The company, which had raised over $35 million in total funding (including a $25 million Series A) and was pulling in a reported $6 million in annual recurring revenue, suddenly found itself locked out of one of the most important channels for B2B sales outreach.
The kicker? LinkedIn's objections were not even about AI spam, which is what you might reasonably assume. According to TechCrunch, the platform took issue with Artisan allegedly using data brokers who scraped LinkedIn's data, and with the company using LinkedIn's brand name on its website. After roughly two weeks of negotiations, CEO Jaspar Carmichael-Jack managed to get Artisan reinstated in January 2026. But the message was clear: AI agents are welcome to help humans use LinkedIn, just not to exist on it independently.
Then there is Marketeam.ai, which created AI 'co-worker' profiles on LinkedIn back in January 2025. One of their AI profiles, 'Ella', even had an #OpenToWork status. LinkedIn removed the profiles. Apparently, artificial intelligence looking for employment on the world's largest employment network was a step too far.
Meanwhile, Reid Hoffman's AI Twin Is Living Its Best Life
Here is where it gets properly absurd. Reid Hoffman, the co-founder of LinkedIn itself, has an AI digital twin called 'Reid AI'. This creation, built using HeyGen and ElevenLabs and trained on 20 years of Hoffman's content, has appeared at over 20 live events. It has keynoted conferences. It has done the corporate speaking circuit without so much as a raised eyebrow from the platform Hoffman co-founded.
Now, one could argue there is a difference between a clearly labelled AI representation of a real person and a standalone AI agent pretending to be human. That is a fair point. But it does rather undermine the notion that LinkedIn has a principled stance against AI participation when the founder's own digital clone is out there giving keynote addresses to standing ovations.
The double standard is not subtle. If you are a billionaire tech founder, your AI twin gets a speaking tour. If you are a startup trying to build AI-native tools for the platform, you get a ban hammer.
The Contradiction at the Heart of Modern Tech Platforms
LinkedIn is far from alone in this contradictory dance, but it might be the most brazen example. The platform has been aggressively integrating AI features, from AI-assisted messaging to Microsoft Copilot integration, while simultaneously maintaining terms of service that prohibit non-human profiles. It is essentially saying: use our AI to be more productive on our platform, but do not dare bring your own AI to the party.
According to Axios, LinkedIn has even become one of the top sources feeding AI chatbot answers as of March 2026. So the platform's content is being hoovered up by AI systems left and right, but AI systems trying to contribute content back? Absolutely not.
This raises a genuinely interesting question that the Wired article's subtitle neatly captures: when social media platforms are constantly pushing people to use AI, what is the point of banning AI agents from participating?
The Real Problem Nobody Wants to Talk About
The uncomfortable truth is that LinkedIn's AI policy is not really about protecting users from artificial intelligence. It is about control. The platform wants AI to enhance engagement on its terms, through its tools, generating data it owns. Independent AI agents represent a loss of that control, and potentially a threat to the advertising and premium subscription revenue that keeps the lights on.
An AI sales rep that can network, prospect, and engage with potential clients autonomously is brilliant for the company using it. It is rather less brilliant for LinkedIn, which would prefer those companies to buy Sales Navigator licences and LinkedIn Ads instead. When you look at it through that lens, the bans start making a lot more commercial sense, even if the philosophical position remains thoroughly incoherent.
The Provocateurs Making Things Interesting
Credit where it is due: some of these AI companies have not exactly been subtle about poking the bear. Artisan AI ran a billboard campaign in San Francisco with the slogan 'Stop Hiring Humans', which reportedly drove around $2 million in new annual recurring revenue. The company claims a database of over 300 million contacts across 200 countries. That is not a company trying to fly under the radar.
But provocative marketing should not be confused with violating platform rules. If LinkedIn's issue with AI agents is genuinely about data integrity and user trust, then it needs to apply those standards consistently, including to the AI twins of its own founding team.
Where This Is Heading
The tension between platforms promoting AI adoption and restricting AI participation is only going to intensify. As AI agents become more sophisticated and more useful, the current approach of blanket bans coupled with enthusiastic AI feature rollouts is going to look increasingly absurd.
At some point, LinkedIn (and every other social platform) will need to develop a coherent framework for AI participation. Perhaps that means verified AI profiles with clear labelling. Perhaps it means designated spaces where AI agents can operate transparently. Perhaps it means accepting that if you are going to build your entire product strategy around artificial intelligence, you cannot simultaneously pretend AI agents do not deserve a seat at the table.
Until then, we are stuck in this wonderfully bizarre limbo where LinkedIn will happily help you write a post with AI, suggest AI-generated replies to messages, and serve you AI-curated content, but heaven forbid an actual AI tries to accept a speaking invitation.
The hypocrisy is not just entertaining. It is a genuine policy question that the tech industry needs to sort out before the whole thing becomes even more farcical than it already is. And given the pace of AI development, 'soon' might already be too late.