Deepfakes & AI Awareness: Understanding the Threat and Staying Safe

  • vInsights
  • September 19, 2025
  • 16 minutes

Deepfakes & Being AI-Aware: What You Need to Know

Artificial intelligence has made remarkable things possible in image, audio, and video: digital effects in movies, playful face filters, virtual characters. But alongside these advances come serious risks. Deepfakes, media that imitate real people saying or doing things they never did, are increasingly used in harmful ways. Understanding them is essential today.

What Are Deepfakes & Why the Concern

Deepfakes are media content (videos, images, audio) in which AI recreates or modifies voices, faces, expressions, or gestures so closely that they look real. They typically rely on large datasets of a person's photos, videos, and audio, processed with neural networks, face-mapping, and similar techniques. The BDO article highlights that while some uses are creative and entertaining, many are deceptive, risking identity theft, misinformation, reputational damage, and even national security threats.

Some of the main risks:

  • Misleading or false information spread via videos that people believe.

  • Impersonation or identity theft—someone pretending to be someone else.

  • Damage to personal or public reputation if fake content is widely shared.

  • Security risks: false claims and manipulated media in political or institutional contexts. BDO emphasises how deepfake content can be used in social engineering, fraud, and similar schemes.

What’s Happening in India

India has begun to see clear cases of (non-sexual) deepfake misuse, along with legal responses and growing awareness. Some examples:

  • A morphed video involving the actress Rashmika Mandanna surfaced in late 2023; her face had been superimposed on another person's video, leading to FIRs under Indian law.

  • In a case involving the actor Anil Kapoor, AI, face-morphing, and other deepfake-style techniques were used to misuse his image, voice, and persona. The court granted injunctions preventing further misuse of his identity.

  • More recently, a person was arrested for sharing a deepfake video of the Prime Minister in a WhatsApp group; the visuals and audio had been altered to spread fear and misinformation.

These cases show that misuse is not merely hypothetical; it is happening. Legal action is following, and courts are being asked to protect personality rights and image rights and to stop the spread of manipulated content.

Deepfakes: Clear and present danger

How Deepfakes Are Made (Simplified)

To understand how they can be detected or guarded against, it helps to know how the technology works at a high level:

  1. Data collection: Images, videos, audio of a person are gathered. The more data, the better the mimicry.

  2. Training models: AI models (e.g., neural networks, face recognition, generative techniques) are trained to replicate facial features, expressions, voice inflections, and so on.

  3. Generation of fake content: The model is used to create manipulated media—face swaps, synthetic voices, edited or entirely generated content.

  4. Refinement: With new tools and feedback, artifacts (faulty edges, lighting mismatches, etc.) are reduced, making detection harder. BDO notes the acceleration in sophistication. (A minimal training sketch follows this list.)
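
To make steps 1 to 4 concrete, here is a minimal sketch of the classic face-swap setup, one shared encoder with two person-specific decoders, assuming PyTorch. The layer sizes, the 64x64 crop size, and the random stand-in data are illustrative assumptions, not any specific tool's implementation.

```python
# A toy version of the shared-encoder / two-decoder face-swap setup.
# Assumes PyTorch; random tensors stand in for aligned 64x64 face crops.
import torch
import torch.nn as nn

def down(cin, cout):   # halve spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1), nn.LeakyReLU(0.1))

def up(cin, cout):     # double spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ReLU())

# Step 2: a shared encoder learns identity-agnostic structure (pose, expression)...
encoder = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))

def make_decoder():    # ...while each decoder learns one person's appearance
    return nn.Sequential(up(128, 64), up(64, 32),
                         nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Step 1 stand-in: a real pipeline would gather many aligned crops of A and B.
faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)

for step in range(3):  # real training runs for many thousands of steps
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# Step 3, the swap: encode A's expression, render it with B's appearance.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The key design idea is that the shared encoder captures pose and expression common to both people, while each decoder renders one person's appearance; feeding A's encoding into B's decoder is what produces the swap.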

Challenges to Detecting Deepfakes

  • Many deepfakes are now very high quality; telltale details such as eye blinks, lighting, and lip alignment, which once gave them away, are increasingly convincing.

  • Metadata (information about when and where a file was made) can be stripped or altered (see the sketch after this list).

  • People online often share content without verifying source, context, or authenticity. Emotionally provocative content spreads faster.

  • Legal systems are still catching up; enforcement is sometimes slow, and existing laws are not specific to deepfake tools. India has laws that are being applied, but none yet that explicitly defines deepfakes.
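
To make the metadata point concrete, here is a minimal sketch, assuming Pillow is installed, that checks whether an image file still carries EXIF metadata. The file name is hypothetical. Note that the limitation cuts both ways: missing metadata does not prove manipulation, and metadata that is present can itself be forged.

```python
# Inspect an image's EXIF metadata with Pillow; absence of metadata is a
# weak signal at best, since uploads and re-saves routinely strip it.
from PIL import Image
from PIL.ExifTags import TAGS

def describe_metadata(path: str) -> None:
    exif = Image.open(path).getexif()  # empty mapping if no EXIF data survives
    if not exif:
        print(f"{path}: no EXIF metadata (possibly stripped or re-encoded)")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"  {name}: {value}")

describe_metadata("suspect_photo.jpg")  # hypothetical file name
```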

What BDO Highlights: Key Risks in Entertainment & Beyond

From the BDO article, some of the risks beyond just entertainment misuse:

  • Identity theft: Using someone’s likeness or voice without consent, sometimes for scams. 

  • False endorsements or malicious content: fake videos or images implying someone said something, endorsed something, or acted in a particular way. This can be especially damaging for public figures and brands.

  • Social engineering / fraud: Using fake media to deceive individuals or institutions to gain trust, money, or access. 

  • Threats to trust and public discourse: when people can't tell what's real, trust erodes, whether in media, politics, or social relationships. BDO warns of misinformation and false narratives.

What Can We Do: Being AI-Aware

Here are some practical steps for individuals, organisations, and society to reduce risk, become more resilient, and promote safe use of AI / deepfake tech.

Individuals / Everyday Users

  • Be careful about which images, videos, and voice recordings of you are publicly available. Even a photo uploaded for fun can be repurposed.

  • Before sharing media, pause: check the source, verify whether similar content appears elsewhere, and see whether it matches what you know of reality (timing, context).

  • Use reverse image searches or tools that detect manipulated media; apps and web tools are starting to help here (see the hashing sketch after this section).

  • Limit permissions: be cautious about apps that ask for camera, microphone, or storage access.

  • Maintain digital hygiene: strong passwords, updated software, no suspicious links.

  • Educate yourself: learn some of the visual and audio clues of manipulation (e.g., mismatched lighting or shadows, unnatural lip sync, inconsistent voice inflections).

Organisations / Media / Brands

  • Adopt verification workflows: check the authenticity of content before publishing, especially user-generated or viral content.

  • Use or invest in tools for detection and provenance (watermarks, metadata tracking, AI detection tools).

  • Establish policies on the use of synthetic media. If you have permission to use somebody's image or voice, record that consent; be transparent if AI or audio-visual modification is involved.

  • Keep crisis management plans: if fake content involving your brand or people spreads, respond quickly and clearly, correct the record, and communicate with stakeholders.

  • Train employees, staff, and social media teams to recognise risks and to question sensational content.

Regulators / Governments / Platforms

  • Clarify legal definitions and rights: identity, image, voice, and personality rights in the context of AI; ensure laws address deepfake misuse specifically. India has seen some legal cases but lacks explicit deepfake regulation.

  • Mandate transparency: platforms should require reporting and faster takedowns of manipulated media, with content labelling when AI or morphing is used.

  • Promote detection and research: support academic and technical work (datasets in local Indian languages, detection tools relevant to Indian media ecosystems).

  • Run public awareness campaigns: inform citizens about the nature of deepfakes, what to look out for, and how to report it.

  • Hold platform intermediaries accountable: social media and messaging apps should be required to remove harmful deepfake content, especially when it threatens public order, peace, or security. India's IT rules already impose certain obligations on intermediaries.
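
As a concrete aid to the verification workflows above, the snippet below is a minimal sketch, assuming Pillow and the third-party imagehash package (pip install imagehash): it compares a suspect image against a known original using a perceptual hash. The file names and threshold are hypothetical assumptions. This is a triage aid, not a deepfake detector.

```python
# Perceptual-hash triage: compare a viral image with a known original.
# Assumes Pillow and the imagehash package; file names are hypothetical.
import imagehash
from PIL import Image

def phash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash defines subtraction as Hamming distance

distance = phash_distance("known_original.jpg", "viral_copy.jpg")
if distance <= 8:  # rough threshold; tune for your own workflow
    print(f"Likely the same underlying image (distance {distance})")
else:
    print(f"Substantially different content (distance {distance})")
```

Perceptual hashes survive resizing and recompression, which is why they are more useful here than exact checksums; they still cannot tell a face swap from a legitimate edit, so they complement, rather than replace, human verification.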

Indian Legal Context & What’s Changing

  • Deepfake misuse is increasingly addressed via existing laws: defamation, publication offences, the IT Act, and so on. Indian courts have granted injunctions based on misuse of persona rights and image rights.

  • Intermediary guidelines: platforms in India are required to remove harmful content, misinformation, and the like upon due complaint and within prescribed procedural timelines.

  • Emerging case law: courts are treating identity misuse and image and voice impersonation seriously. As public awareness grows, so does the pressure for legal and platform responses.

Why This Matters Now

  • The tools are becoming cheaper, more accessible; even non-experts can generate quite convincing fake content.

  • Content spreads fast over social media, messaging apps; once misinformation or manipulated media has gone viral, damage is hard to undo.

  • For public trust—of media, of governments, of institutions—preserving authenticity is critical.

  • For individuals, reputation, financial safety, privacy are at stake.

Key Takeaways

  1. Deepfakes are not just sci-fi; they are real, now.

  2. Not all misuse is extreme or scandalous; even small impersonations, false voice recordings, and manipulated video frames can cause harm.

  3. Being AI-aware means being cautious, verifying sources, limiting exposure of your personal data, and supporting tools and policies that enable detection and mandate transparency.

  4. Collective action from individuals, companies, platforms, and legal systems is required to keep this technology from being badly misused.