Coordinated Inauthentic Behavior: How Fake Account Networks Manipulate What You See
How bot networks, troll farms, and fake accounts work together to manipulate online discourse. A plain-language explainer.
Coordinated inauthentic behavior — CIB for short — is when networks of fake accounts act together to make something look more popular, more controversial, or more real than it actually is. Think of it as a crowd of paid extras pretending to be a real audience.
How It Works
CIB operations follow a pretty standard playbook.
Step 1: Build the network. An operator creates dozens, hundreds, or thousands of fake accounts. Each gets a profile photo (often AI-generated or stolen), a backstory, and a posting history to look authentic. Some operations buy hacked accounts from real people instead; these come with built-in credibility.

Step 2: Seed the content. A small number of accounts post the target content: an article, a talking point, a meme. The content itself might be true, false, or somewhere in between. CIB isn't always about lies. Sometimes it's about making a real position look more popular than it is.

Step 3: Amplify. The rest of the network likes, shares, and comments on the content. This triggers platform algorithms. If enough accounts engage quickly, the content gets pushed into real people's feeds. It starts trending. Actual humans see it and some share it themselves, not knowing how it got there. (A toy sketch of this amplification effect follows these steps.)

Step 4: Engage and defend. Troll accounts jump into comment sections. They attack critics, defend the narrative, and derail opposing conversations. The goal isn't always to win arguments. Sometimes it's to make the conversation so toxic that normal people leave.

Step 5: Rinse and repeat. When platforms catch and remove accounts, operators spin up new ones. The cycle continues.

The whole thing can run on a budget of a few thousand dollars a month, or scale to industrial operations with hundreds of employees.
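To see why a quick burst of fake engagement works so well, here is a minimal sketch in Python. It assumes a deliberately simplified trending score based on engagement velocity; the account counts, time windows, and threshold are invented for illustration, and real platform ranking systems are far more complex.

```python
# Toy illustration of Step 3 (Amplify): how a small coordinated network can
# push a post past a naive "trending" threshold before any real audience
# arrives. All numbers and the scoring rule are made up for illustration.

def trending_score(engagements: int, hours_since_post: float) -> float:
    """Hypothetical score: engagement velocity, decayed by post age."""
    return engagements / (hours_since_post + 1)

TRENDING_THRESHOLD = 200  # made-up cutoff for "push into more feeds"

# An organic post: 300 genuine engagements spread over 12 hours.
organic = trending_score(engagements=300, hours_since_post=12)

# A seeded post: 400 fake accounts all engage within the first hour.
coordinated = trending_score(engagements=400, hours_since_post=1)

print(f"organic post score:     {organic:.1f}")      # ~23  -> ignored
print(f"coordinated post score: {coordinated:.1f}")  # 200  -> promoted

if coordinated >= TRENDING_THRESHOLD:
    print("a naive ranker promotes the seeded post into real users' feeds")
```

The specific numbers don't matter. The point is that concentrating engagement into a short window is exactly what velocity-based ranking rewards, and concentrated engagement is exactly what a coordinated network can manufacture on demand.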
Real-World Example: Russia's Internet Research Agency
The most documented CIB operation in history ran out of a four-story building at 55 Savushkina Street in St. Petersburg, Russia. More than 1,000 employees worked there by 2015, according to reports from former workers and the U.S. Department of Justice indictment.
Staff worked 12-hour shifts. They ran fake American accounts on Facebook, Twitter, Instagram, and YouTube. They created fake activist groups — "Blacktivist" on Facebook had more followers than the real Black Lives Matter page. They organized real-world protests in the U.S. without ever setting foot in the country.
Each worker had daily quotas: a certain number of posts, comments, and new accounts. They were paid around $800 a month in cash. The whole operation cost roughly $1.25 million monthly — pocket change compared to a traditional intelligence operation.
The U.S. Department of Justice indicted 13 IRA employees and three related companies in February 2018. The operation didn't stop. It adapted and continued under new structures.
How to Spot It
Watch for unnatural patterns. Real conversations don't have 50 accounts posting the same talking point within minutes. If you see identical or near-identical language across multiple accounts on the same topic, that's a signal.

Check account age and activity. Accounts created recently that post almost exclusively about one political topic are suspicious. Real people have varied interests.

Look at posting times. An "American mom from Ohio" posting at 3 AM Eastern every night might actually be working a day shift in St. Petersburg.

Notice the ratio. Accounts with thousands of posts but almost no followers, or the reverse, are often part of coordinated networks.

Trust your gut on engagement. If a relatively obscure post has thousands of identical-sounding supportive comments, something's off.
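The first signal, near-identical language from many accounts in a short window, is also the easiest to check mechanically. The sketch below is a rough illustration only: the sample posts, field names, and thresholds are hypothetical, and real detection systems also weigh account age, posting times, and follower ratios.

```python
# Rough sketch of the "unnatural patterns" signal: many distinct accounts
# posting the same (or near-identical) text within a short window.
# All data and thresholds here are hypothetical.

from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "user_a", "text": "Candidate X betrayed us!",    "time": "2025-03-01 14:00"},
    {"account": "user_b", "text": "Candidate X betrayed us!!",   "time": "2025-03-01 14:03"},
    {"account": "user_c", "text": "candidate x betrayed us",     "time": "2025-03-01 14:05"},
    {"account": "user_d", "text": "Lovely weather in Ohio today", "time": "2025-03-01 14:04"},
]

def normalize(text: str) -> str:
    """Strip case and punctuation so near-identical phrasings collapse together."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

WINDOW = timedelta(minutes=10)   # made-up coordination window
MIN_ACCOUNTS = 3                 # made-up cluster size worth flagging

# Group posts by their normalized text.
clusters = defaultdict(list)
for post in posts:
    clusters[normalize(post["text"])].append(post)

# Flag clusters where several distinct accounts posted within the window.
for text, group in clusters.items():
    accounts = {p["account"] for p in group}
    times = [datetime.strptime(p["time"], "%Y-%m-%d %H:%M") for p in group]
    if len(accounts) >= MIN_ACCOUNTS and max(times) - min(times) <= WINDOW:
        print(f"possible coordination: {len(accounts)} accounts posted "
              f"'{text}' within minutes of each other")
```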
The Scale
Meta alone has disrupted over 200 covert influence operations from more than 60 countries since 2017. In Q1 2025, they took down CIB networks from China, Iran, and Romania.
China's Spamouflage operation is one of the largest ever tracked — it spans X, YouTube, Facebook, Instagram, TikTok, Tumblr, Blogspot, Quora, and Reddit. A NATO StratCom Centre of Excellence experiment in 2025 found that platforms remain "highly vulnerable to low-cost coordinated inauthentic behavior," even as detection improves.
The shift from human troll farms to AI-driven operations is accelerating. A 2026 paper in Science warned that autonomous AI agents can now manufacture fake consensus at a scale no human operation could match.