YouTube is expanding its AI deepfake detection tools to protect politicians and journalists, but some experts warn that detection alone may never keep up with the rapid evolution of generative AI.
Experts say cryptographic verification may be a more effective long-term solution than trying to detect manipulated content after it spreads.
As generative artificial intelligence becomes more powerful, online platforms are facing an escalating challenge: how to detect and prevent convincing deepfake content before it spreads widely.
YouTube recently expanded its AI-powered deepfake detection systems to include additional protections for journalists and public officials. The move is part of a broader effort by major technology platforms to address the growing risks associated with synthetic media, including manipulated video, audio, and images designed to mislead audiences.
But some experts argue that relying primarily on detection tools may not be enough.
Steffan Deiss, CEO and co-founder of technology firm The Hashgraph Group, believes the current strategy risks becoming a never-ending arms race between generative AI systems and the tools designed to identify them.
“Deepfake detection is turning into a catch-up race Big Tech can’t win,” Deiss explains. “The AI generating fake images, video and audio is improving faster than the systems trying to catch it, and every new detection tool effectively teaches the next wave of deepfakes how to evade it.”
As generative AI models improve their ability to produce realistic media, detection systems must constantly adapt to identify new manipulation techniques. By the time platforms flag or remove misleading content, the material may already have been widely shared.
The Limits of Detection
Deepfakes have evolved rapidly over the past several years, driven by advances in generative AI systems capable of producing highly realistic video and audio.
According to research from the cybersecurity firm DeepMedia, the number of deepfake videos circulating online has grown dramatically in recent years, with synthetic media increasingly used in scams, political misinformation, and identity fraud.
Technology platforms have responded by developing AI-based detection systems designed to identify manipulated media. These tools analyze subtle inconsistencies in images, facial movements, lighting patterns, or audio signals that may reveal artificial generation.
However, as generation technology improves, these telltale indicators are becoming subtler and harder to spot.
Verifying What Is Real
Deiss argues that the industry may ultimately need to shift its focus from detecting fake content to verifying authentic content from the moment it is created.
Cryptographic verification systems could allow digital media to carry a secure record of its origin and editing history, enabling platforms and viewers to confirm whether a piece of content has been altered.
“The real solution isn’t chasing fakes after they spread — it’s proving what’s real from the start,” Deiss says.
Cryptographic technologies already underpin many parts of the internet, from secure messaging to digital identity systems. Similar techniques could potentially be applied to media content, embedding proof of authenticity directly into files as they are created.
Distributed ledger technologies, for example, can record tamper-resistant metadata about digital content, allowing platforms to verify whether a video or image has been modified since its original creation.
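The basic idea can be illustrated with a minimal sketch. The example below is purely illustrative, not any platform's actual implementation: it records a content hash at creation time and signs the provenance record, so any later change to the media (or to the record itself) fails verification. For simplicity it uses an HMAC with a shared key as a stand-in for a real digital signature; the function names and the `creator` field are hypothetical, and a production system would use asymmetric keys and a standard such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real system would use an asymmetric key pair.
SIGNING_KEY = b"demo-signing-key"

def create_provenance_record(media_bytes: bytes, creator: str) -> dict:
    """Hash the media at creation time and sign the resulting record."""
    record = {
        "creator": creator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    """Check that the record is authentic and the media is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, record["signature"]):
        return False  # the provenance record itself was tampered with
    # The media must still match the hash captured at creation time.
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

video = b"original video bytes"
rec = create_provenance_record(video, creator="newsroom-camera-01")
print(verify(video, rec))               # True: content unchanged since creation
print(verify(b"deepfaked bytes", rec))  # False: content no longer matches record
```

The key property is that verification asks a narrow, answerable question ("does this file match its signed creation record?") rather than the open-ended one detection systems face ("does this look fake?").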
The Next Phase of the Deepfake Challenge
As synthetic media becomes easier to produce, the debate over how best to manage deepfakes is likely to intensify.
Some experts believe detection systems will remain essential tools, while others argue that authentication frameworks will ultimately play a larger role in protecting information ecosystems.
For platforms such as YouTube, the challenge is balancing the speed of AI innovation with safeguards that protect users from manipulated content.
Whether through improved detection, cryptographic verification, or a combination of both, the race to maintain trust in digital media is only just beginning.
