Proving a Photo Is Real Is Now Harder Than Faking One
Samsung, Google, and Spotify are building receipts for reality. Here's why proving something is human-made just became the internet's hardest problem.
Generating a fake photo of anyone doing anything now takes about 12 seconds. Proving a real photo wasn't faked? That's the part nobody's figured out yet.
This is the problem Samsung tried to solve two days ago when it announced it would embed cryptographic "Content Credentials" into every photo taken on a Galaxy phone. Google's Pixel 10 already does it. Sony's pro cameras do it. Spotify now sorts uploads into three categories: human-created, AI-assisted, and fully AI-generated.
Something shifted. The biggest companies in tech aren't racing to make better AI anymore. They're racing to prove things aren't AI.
The coin-flip problem
Here's the number that explains everything: humans detect deepfakes at 55% accuracy.
That comes from a meta-analysis of 56 studies published in late 2024. Fifty-five percent. Barely better than guessing. For video deepfakes specifically, viewers scored no better than chance on 11 of 16 test clips. Your instincts are useless.
And that was before the latest generation of tools. Detected deepfake cases surged from 500,000 in 2023 to 8 million in 2025, a sixteenfold increase. A Europol report warned that up to 90% of online content could be synthetically generated by 2026. Whether that exact number lands or not, the direction is clear. The internet is filling up with stuff that was never real, and your brain can't tell the difference.
The old approach — training people to "spot the fakes" — is dead. Media literacy classes still teach students to look for weird fingers and odd reflections. Those artifacts vanished two generations of AI models ago.
Receipts for reality
So if you can't detect fakes, what do you do? You prove originals.
That's the idea behind C2PA — the Coalition for Content Provenance and Authenticity. It's an open standard backed by Adobe, Microsoft, Google, Intel, the BBC, and about 6,000 other organizations through the Content Authenticity Initiative. Instead of asking "is this fake?", it asks "can you prove this is real?"
Here's how it works. When you take a photo on a C2PA-enabled device, the camera embeds a cryptographic signature at the moment of capture. That signature records the device, the time, the location, and whether any edits were made afterward. The metadata travels with the image. Anyone can verify it.
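To make the mechanics concrete, here is a minimal sketch of the sign-at-capture, verify-anywhere idea. This is not the real C2PA wire format: actual Content Credentials are COSE signatures over a JUMBF manifest, signed with an X.509 device certificate. The device key, field names, and HMAC stand-in for asymmetric signing below are all simplifications for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical per-device key; real devices hold a certificate-backed private key.
DEVICE_KEY = b"per-device-signing-key"

def sign_capture(image_bytes: bytes, device: str, timestamp: str) -> dict:
    """Build a credential at the moment of capture."""
    manifest = {
        "device": device,
        "timestamp": timestamp,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edits": [],  # C2PA-aware editors append to the edit history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the credential is intact and matches the image."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    )
    ok_hash = claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return ok_sig and ok_hash

photo = b"\x89PNG...raw sensor data"
cred = sign_capture(photo, device="Pixel 10", timestamp="2026-02-28T09:00:00Z")
print(verify(photo, cred))                # the untouched photo verifies
print(verify(photo + b"altered", cred))   # any change to the pixels breaks the hash
```

The point of the sketch is the one-way property: a valid signature over the image hash proves the photo matches what the device captured, while any later modification, however small, breaks verification.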
Google shipped this on the Pixel 10 in September 2025. Samsung announced it for Galaxy devices on February 28, 2026. Sony's PXW-Z300 broadcast camera already supports it. DigiCert is selling C2PA integration kits to camera manufacturers.
The Content Authenticity Initiative just passed 6,000 members and its fifth anniversary. Its founder wrote in a recent blog post that 2026 would be "uniquely important" — the year content provenance moves from standard to reality.
The "human-made" premium
Meanwhile, something interesting is happening in advertising.
Heineken ran a campaign bragging that its ads were made by humans. Polaroid did the same. Cadbury joined in. "Human-made" is becoming a selling point — the organic food label of the attention economy.
Spotify's version is quieter but arguably bigger. The platform now categorizes every upload as human-created, AI-assisted, or fully AI-generated. It scans for synthetic vocals resembling living artists. Tracks generated by models trained on copyrighted content get pulled. The message to musicians: prove you're real, or risk getting sorted into the AI pile.
This isn't sentimentality. It's economics. When everything can be generated, the scarce thing — the thing with value — is proof that a human actually made it. Imperfection becomes a feature. Effort becomes a brand.
The gap nobody's talking about
But here's the catch. Content Credentials only work if the whole chain participates.
You take a verified photo on your Pixel. You upload it to X. The platform strips the metadata. The photo is now unverified again. Most social platforms don't preserve C2PA data. Most messaging apps don't either. The receipt exists at creation and disappears the moment the photo enters the ecosystem where it matters most.
There's a second problem. Content Credentials prove a photo is real. They don't prove an unsigned photo is fake. If someone takes a screenshot, edits it in an app that doesn't support C2PA, and reposts it — there's no credential. That doesn't mean it's fake. It means the system can't tell you anything about it.
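That asymmetry is easy to get wrong, so it helps to spell out the three possible verdicts. The sketch below is a hypothetical verifier, not any real C2PA API; the image records, `credential` field, and `upload_to_platform` function are invented for illustration. The key is that a missing credential maps to "unknown", never to "fake".

```python
from enum import Enum

class Verdict(Enum):
    VERIFIED = "credential present and matches the image"
    TAMPERED = "credential present but does not match the image"
    NO_CREDENTIAL = "nothing to check; not evidence of fakery"

def classify(image: dict) -> Verdict:
    """Three-way verdict: absence of a credential is not proof of anything."""
    cred = image.get("credential")
    if cred is None:
        # Screenshots, re-encodes, and non-C2PA editors all land here.
        return Verdict.NO_CREDENTIAL
    return Verdict.VERIFIED if cred["matches_image"] else Verdict.TAMPERED

def upload_to_platform(image: dict) -> dict:
    # Most platforms today re-encode the image and drop its metadata.
    return {"pixels": image["pixels"]}

original = {"pixels": b"...", "credential": {"matches_image": True}}
print(classify(original))                      # Verdict.VERIFIED
print(classify(upload_to_platform(original)))  # Verdict.NO_CREDENTIAL
```

Notice that one platform hop turns a verified photo into an unknown one, which is exactly the chain-of-custody gap the standard has yet to close.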
So we're building a world where verified content carries a receipt and everything else is... ambiguous. That's better than nothing. But it also creates a two-tier internet: trusted content from expensive devices that support the standard, and everything else from everyone else.
The race against apathy
The real threat isn't that people will believe deepfakes. It's that they'll stop believing anything.
Researchers call it the "liar's dividend." When fakes are everywhere, anyone can dismiss real evidence as AI-generated. A politician caught on camera says it's a deepfake. A company denies a leaked document. A government claims footage of atrocities was fabricated. The existence of deepfakes doesn't just create false trust — it destroys real trust.
This is already happening during the Iran conflict coverage this week. Deepfakes of events in Venezuela and Minneapolis spread faster than verification could follow. It's not that people believed the fakes. It's that some people stopped believing the real footage.
Content provenance is one answer. But it only works if adoption outpaces apathy. If people give up on knowing what's real before the receipts become universal, the technology arrives too late.
What happens next
The pieces are moving fast. Samsung and Google sit atop the Android ecosystem, which runs roughly 70% of the world's smartphones. If both ship C2PA as default within the next year, billions of photos will carry verification metadata from day one.
The missing piece is platforms. Until Instagram, X, YouTube, TikTok, and WhatsApp preserve and display Content Credentials, the chain breaks at distribution. Some are moving — LinkedIn shows C2PA data, and the BBC verifies it. But the platforms where most people actually consume content haven't committed.
The next 12 months will decide whether "is this real?" becomes an answerable question or a permanent shrug. The technology exists. The standard is open. The phones are shipping.
The question is whether the platforms where we actually live will bother to use it.