Deepfakes Just Broke Identity Verification
Deepfakes can now fool the systems banks and apps use to verify you're real. The tools built to catch fake videos are failing.
One in four Americans has received a deepfake voice call in the past year. That's according to Hiya, a company tracking phone scams. The calls sound like family members asking for money. Or bosses demanding wire transfers. They're AI-generated voices trained on real recordings.
But voice scams are the visible part. The deeper problem is identity verification.
When KYC Systems Stop Working
Banks and fintech apps use "know your customer" (KYC) systems to verify new accounts. You take a selfie. The system checks that it's really you. Deepfakes are breaking that process.
A security report published yesterday details how synthetic faces are bypassing biometric checks. The attacks aren't random. They're targeted at account opening, loan applications, and crypto exchanges. Places where a fake identity unlocks real money.
Traditional detection tools look for visual glitches. Weird shadows. Unnatural skin texture. But 2026 deepfakes don't have those tells anymore. AI detection software hits 97% accuracy on deepfake images. That sounds good until you realize roughly 3% of fakes still get through. At scale, that's thousands of fraudulent accounts.
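To put that miss rate in perspective, here's a quick back-of-the-envelope calculation in Python. The traffic and fraud-rate figures are illustrative assumptions, not numbers from the report.

```python
# Back-of-the-envelope: what a ~3% miss rate means at scale.
# All volume figures are illustrative assumptions, not report data.

daily_verifications = 1_000_000   # assumed daily selfie checks on a large platform
fraud_attempt_rate = 0.01         # assumed share of attempts backed by deepfakes
detector_miss_rate = 0.03         # ~3% of fakes slip past a 97%-accurate detector

fraud_attempts = daily_verifications * fraud_attempt_rate
fakes_that_pass = fraud_attempts * detector_miss_rate

print(f"Fraud attempts per day: {fraud_attempts:,.0f}")           # 10,000
print(f"Fakes that pass per day: {fakes_that_pass:,.0f}")         # 300
print(f"Fakes that pass per year: {fakes_that_pass * 365:,.0f}")  # ~109,500
```

Even under conservative assumptions, a few percent slipping through adds up to tens of thousands of fraudulent accounts a year.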
The twist: humans beat AI at detecting deepfake videos. A University of Florida study found people spot fake videos better than machines. We pick up on tiny behavioral cues software misses. But you can't scale human review to millions of daily verifications.
From Influence to Infrastructure
Information warfare used to target minds. Now it's targeting systems.
Russia's running a digital propaganda campaign designed to erode Western support for Ukraine. The Atlantic Council tracked narratives about Ukrainian corruption and NATO aggression spreading across social platforms. The goal isn't to change your mind in one post. It's to shift the baseline of what seems plausible over months.
A documentary premiering this week shows how that propaganda starts young. Russian primary schools now have mandatory patriotic education. The curriculum teaches kids to support Putin and the war. One teacher filmed it secretly. His footage shows the propaganda machine at the source.
Meanwhile, Armenia's been hit with a fake news operation impersonating CNN, Reuters, and Bloomberg. The articles look real. Same logos, same layouts. But the stories are fabricated. Researchers call it a combination of Storm-1516 (sensational AI content) and Doppelgänger (media impersonation). The blend makes it harder to spot.
Cyber Meets Kinetic
The U.S. operations against Iran last week showed something new. Cyber attacks weren't separate from airstrikes. They were fused.
Australian defense analysts describe it as an integrated assault on technology ecosystems. The objective isn't just destroying hardware. It's blinding sensors, distorting information, and shaping the perceptions of both civilians and commanders. Bombs hit infrastructure. Cyber operations hit the information layer that makes infrastructure work.
When a nation can't trust its own military communications, kinetic force becomes harder to coordinate. When civilians can't verify what's real, panic spreads faster than facts. That's the new battlefield.
The Surveillance Response
One way to fight deepfakes is more surveillance. Verification systems that track how blood flows through your face. Software that analyzes how you turn your head. Biometric checks so detailed they're nearly impossible to fake.
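As a rough sketch of how those signals might be combined (hypothetical names, weights, and threshold, not any vendor's actual pipeline):

```python
from dataclasses import dataclass

# Hypothetical multi-signal liveness check. Signal names, weights, and the
# threshold below are illustrative assumptions, not a real vendor's pipeline.

@dataclass
class LivenessSignals:
    blood_flow_consistency: float   # 0-1, e.g. from remote photoplethysmography
    head_motion_naturalness: float  # 0-1, from pose-over-time analysis
    skin_texture_score: float       # 0-1, from per-frame texture analysis

def liveness_score(s: LivenessSignals) -> float:
    # Weighted blend of independent signals; the weights are assumptions.
    weights = {"blood_flow": 0.5, "motion": 0.3, "texture": 0.2}
    return (weights["blood_flow"] * s.blood_flow_consistency
            + weights["motion"] * s.head_motion_naturalness
            + weights["texture"] * s.skin_texture_score)

def passes_verification(s: LivenessSignals, threshold: float = 0.8) -> bool:
    return liveness_score(s) >= threshold

# A replayed deepfake might show convincing texture but no plausible
# blood-flow signal, dragging the combined score below the threshold.
suspect = LivenessSignals(blood_flow_consistency=0.2,
                          head_motion_naturalness=0.7,
                          skin_texture_score=0.9)
print(passes_verification(suspect))  # False under these assumed weights
```

The point of blending independent signals is that a fake good enough to beat one check usually isn't good enough to beat all of them at once.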
But more surveillance creates new problems. A Gen Z team just launched "Eyes on AI," a tool that helps people assess their surveillance risk. It maps which companies are tracking you, what AI systems have your data, and how your routine is being profiled.
The site warns: "Warrantless surveillance is entirely legal and artificial intelligence is powering it." That's the trade-off. Better security through deeper tracking. More safety, less privacy.
What's Actually Happening
Information warfare isn't one thing anymore. It's deepfake voices draining bank accounts. It's propaganda campaigns starting in elementary schools. It's fake news sites cloning major outlets. It's cyber operations integrated with missiles.
The common thread: information as a weapon is moving from persuasion to deception. It's not about changing what you believe. It's about breaking the systems you rely on to know what's true.
Climate disinformation is warping disaster responses. Fake social posts during Hurricane Helene included Russian propaganda. Wildfire hoaxes spread during the Los Angeles fires. When people can't trust emergency information, they make worse decisions.
West African content creators met last week to pledge against "commercialized disinformation." One admitted he used to spread false stories for money. Training changed his approach. That matters because information warfare works partly through people who don't realize they're part of it.
Where This Goes
Detection technology will improve. Blockchain-based authenticity verification is being tested. Watermarking systems are getting better. But every defense creates new attack vectors.
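As a simplified sketch of how authenticity verification can work, here's a sign-and-verify example over an image's raw bytes. Real provenance standards such as C2PA use asymmetric signatures and embedded metadata; the shared key below is an illustrative shortcut.

```python
import hashlib
import hmac

# Minimal sketch of content authenticity verification: a publisher signs the
# hash of an image at publish time, and anyone holding the key can later
# check whether the bytes were altered. The shared secret is a simplification;
# real provenance systems use asymmetric signatures.

SECRET_KEY = b"publisher-signing-key"  # assumed key, for demonstration only

def sign_content(content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_content(content), signature)

original = b"...image bytes as published..."
sig = sign_content(original)

print(verify_content(original, sig))              # True: untouched
print(verify_content(original + b"edit", sig))    # False: modified after signing
```

The catch, as with any defense, is key management and adoption: a signature only helps if publishers sign at the source and platforms actually check.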
The deeper issue is trust. When identity systems fail, we need backup verification methods. When news outlets get cloned, we need ways to verify sources. When cyber and kinetic operations blend, we need new frameworks for what war looks like.
AI-powered surveillance might catch more deepfakes. It'll also create detailed profiles of everyone. That's not a hypothetical trade-off. It's happening now.
The information war isn't coming. It's here. It's hitting identity systems, propaganda pipelines, military operations, and disaster responses. The question isn't whether information can be weaponized. It's whether we can build systems resilient enough to function when it is.
Keep Reading
AI Wins at Spotting Fake Photos, Humans Win at Videos
A new University of Florida study reveals the detection split: AI crushes deepfake photos, but humans outperform machines at spotting fake videos.
Both Sides Are Right. Both Sides Are Lying. Welcome to Information Warfare.
When two superpowers accuse each other of exactly the same thing — and both have evidence — someone's lying. Or everyone is. This is the defining pattern of the decade.
Proving a Photo Is Real Is Now Harder Than Faking One
Samsung, Google, and Spotify are building receipts for reality. Here's why proving something is human-made just became the internet's hardest problem.