The New Face of Fraud: Deepfakes and the Future of Video KYC

In 2023, deepfake attacks against fintechs jumped more than 700%. With more than $43 billion lost worldwide to identity fraud, passing regulatory checkboxes is no longer sufficient. For banks and other financial institutions, safeguarding trust in the era of synthetic fraud means strengthening video KYC with anti-spoofing at the protocol level.

The Double-Edged Sword of Digital Onboarding

India’s digital revolution has unleashed scale and pace across industries, from opening a bank account in minutes to instant loans. Video KYC (Know Your Customer), once a pandemic-era stopgap, is now core to digital onboarding across banks, NBFCs, and fintechs.

Its promise is real: quicker customer acquisition, lower operational costs, and compliance with evolving RBI norms. But as adoption rises, so do the risks. Deepfake-enabled fraud is no longer theoretical; it is here, and it is growing exponentially.

In 2023 alone, deepfake-based fraud against financial services grew by more than 700%, shifting from sporadic incidents to systemic attacks.

Deepfakes Have Gone Mainstream—And Cheap

A few years ago, creating a realistic deepfake required high-end GPUs, technical expertise, and time. Today, a scammer can fabricate identities or spoof video verification with tools that cost less than an evening out.

Consider the landscape today:

  • 2,000+ publicly available face-swap and video manipulation tools
  • 1,000+ programs capable of voice cloning with uncanny realism
  • 50+ bypass kits openly sold in grey markets to circumvent KYC protocols

These aren’t edge cases. They’re easily available, simple to use, and increasing in sophistication. The price of deception has fallen. The risk hasn’t.

Anatomy of a Deepfake-Based KYC Attack

To grasp the urgency, it helps to look at how these attacks unfold:

  • Face Spoofing: Scammers employ AI-created masks or overlays on live video calls to impersonate genuine users.
  • Voice Cloning: They clone the voice of the target with only a few seconds of audio, evading voice-based verifications.
  • Replay Attacks: Previously recorded video or audio is replayed to fool systems that lack active liveness checks.
  • Synthetic Identities: New identities are forged—complete with deepfaked faces, voices, and fake documents.

What is so concerning about these threats is their potential to evade first-gen KYC systems based on static checks or passive video recording.
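The replay-attack vector above is exactly why active liveness checks issue a fresh, unpredictable challenge for every session: a recording made yesterday cannot contain a nonce that was generated seconds ago. A minimal sketch of this idea in Python (the challenge prompts and the 30-second TTL are illustrative assumptions, not any specific vendor's protocol):

```python
import secrets
import time

# Illustrative prompts; a real system would draw from a much larger, randomized pool.
CHALLENGES = ["turn head left", "blink twice", "read the digits aloud"]

def issue_challenge(ttl_seconds=30):
    """Issue a random prompt plus a one-time nonce with a short expiry.
    A pre-recorded video cannot reference a nonce that did not exist
    when the recording was made, which defeats simple replay."""
    return {
        "prompt": secrets.choice(CHALLENGES),
        "nonce": secrets.token_hex(16),
        "expires_at": time.time() + ttl_seconds,
    }

def verify_response(challenge, response_nonce, action_matched, now=None):
    """Accept only if the response echoes the exact nonce, arrives before
    expiry, and the prompted action was actually performed on camera."""
    now = time.time() if now is None else now
    return (
        secrets.compare_digest(response_nonce, challenge["nonce"])
        and now <= challenge["expires_at"]
        and action_matched
    )
```

The heavy lifting (did the user actually blink twice?) would sit behind `action_matched`, fed by a face-tracking model; the nonce-and-TTL wrapper is what makes old footage useless.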

India’s Financial Ecosystem: Exposed and Underequipped

Even with accelerated digitisation, risk controls in the financial sector have not kept pace. Institutions still rely on minimal liveness verification or manual reviewer inspection to detect fraud, approaches that are proving increasingly ineffective against AI-driven attacks.

Major vulnerabilities are:

  • No regulatory mandate to detect or report deepfake-based fraud
  • Lack of shared intelligence across FIs about attack patterns or spoofing tools
  • Limited investment in real-time anti-spoofing infrastructure
  • Rapid scaling of video KYC without proportionate attention to synthetic risks

What this means is that even institutions compliant with RBI’s video KYC norms may still be vulnerable in practice.

From Compliance to Credibility: Why Anti-Spoofing Is Non-Negotiable

To safeguard not only compliance, but credibility, banks need to move beyond checkbox KYC to a multi-layered trust infrastructure.

That means building verification systems that proactively determine whether the individual on the other end of the screen is real, live, and present.

Industry-leading defenses include:

  • Active & Passive Liveness Detection: Scanning micro-expressions, light variations, and facial depth to identify spoofing
  • Voice-Video Sync Analysis: Confirming lip movement in sync with voice in real-time
  • AI-Based Anomaly Detection: Detecting unusual face or audio patterns in tens of thousands of sessions
  • Tamper-Proof Audit Trails: Recording device fingerprints, geolocation, and time-stamps on each session
  • Synthetic Identity Recognition: Flagging personas constructed from generative-AI datasets

These features can’t be retrofitted—they need to be baked into the protocol.
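To make one of these defenses concrete, voice-video sync analysis can be approximated by checking whether mouth movement tracks speech energy frame by frame. The sketch below assumes per-frame features (a lip-aperture estimate from a face tracker, RMS audio energy aligned to the same frames) have already been extracted; the 0.5 threshold is an illustrative assumption:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def lip_sync_score(mouth_openness, audio_energy, threshold=0.5):
    """Flag a session when per-frame mouth movement does not track
    speech energy, as in a dubbed or replayed video.

    mouth_openness: lip-aperture estimate per video frame
    audio_energy:   RMS energy of the audio aligned to the same frames
    Returns (score, is_suspicious).
    """
    score = pearson(mouth_openness, audio_energy)
    return score, score < threshold
```

Production systems use far richer models than a single correlation, but the principle is the same: genuine speech couples the audio and video channels, and spoofed sessions tend to break that coupling.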

What Gridlines by OnGrid Is Doing Differently

OnGrid’s Smart Video KYC solution is designed with this future-threat model in mind. It does more than record video calls. It proactively questions the identity of the individual being onboarded through:

  • Real-time facial movement analysis
  • Voice-video synchronization validation
  • AI-driven biometric anomaly tracking
  • Device and network fingerprinting
  • Geo-located time-stamped logs for forensic examination

All these defenses operate in the background, catching fraud before a fictitious identity slips through.
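One common way to make audit logs tamper-evident, which the source does not specify but fits the "tamper-proof audit trails" described above, is a hash chain: each record commits to the previous record's hash, so editing any earlier entry invalidates everything after it. A hedged sketch:

```python
import hashlib
import json

def append_entry(chain, entry):
    """Append a session event (e.g. device fingerprint, geolocation,
    timestamp). Each record stores the previous record's hash, so any
    later modification of an earlier entry breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": record_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; returns False if any record was modified."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Anchoring the latest hash in an external system (or a signed timestamp service) would extend this from tamper-evident to practically tamper-proof for forensic review.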

Where Policy Needs to Catch Up

Despite the alarming rise in deepfake-led fraud, India’s regulatory framework remains unclear on synthetic threats.

What is lacking:

  • Official categorization of deepfake/synthetic identity fraud
  • Mandatory reporting of KYC failures caused by spoofing
  • Shared sectoral intelligence on emerging spoofing toolkits
  • Anti-spoofing requirements as part of RBI’s KYC and onboarding guidelines

Without these, financial institutions are at an asymmetric disadvantage—fraudsters innovate faster than compliance structures. 

Building Trust in an Age of Digital Deception

The lesson of 2023 is clear: compliance is insufficient. In an era in which AI can fake a face, forge a voice, and create a person, financial institutions need to redefine what it means to know their customer.

Video KYC is not a fad. But its future, and its security, depends on the decisions we make now: the decision to invest in detection, to build defenses in from the start, and to create systems that are not merely compliant but resilient.
