
Ethics of Deepfake and Synthetic Media

  • Published: 26th Jul, 2021

The documentary “Roadrunner: A Film About Anthony Bourdain”, directed by Morgan Neville, was recently released. To craft the film’s narrative, Neville drew on tens of thousands of hours of video footage and audio archives, and for three particular lines heard in the film, he commissioned a software company to create an A.I.-generated version of Bourdain’s voice.




  • United Kingdom sports celebrity David Beckham was featured in a not-for-profit video for Malaria No More.
  • In the minute-long video, Beckham made a fundraising plea in nine different languages.
  • His mouth movements and facial gestures were manipulated as if he were speaking in multiple voices.
  • Such technology is termed ‘deepfake’ and falls under the broader class of ‘synthetic media’, which prima facie seems natural and real but is created using advanced machine-learning and artificial-intelligence tools.
  • A deepfake or synthetic-media audio/video made with an intent to deceive, intimidate, misattribute, or inflict reputational harm poses many ethical challenges to society.


What is deepfake or synthetic media?

  • Deepfakes are digital falsifications of pictures, videos, and audio, created using an editing process automated through AI techniques.
  • Synthetic media is a broader class than deepfakes: it covers content in which AI algorithms manipulate faces, speech patterns, tones, and other data to produce an artificial human persona.

What are the ethical issues surrounding deepfake/synthetic media?

Threat to Individuals

  • The very first malicious uses of deepfakes inflicted emotional and reputational harm, and in some cases violence, on individuals, chiefly women.
  • Pornographic deepfakes can threaten, intimidate, and inflict psychological harm on an individual.
  • A deepfake can depict a person indulging in antisocial behaviour or saying vile things they never did; such deepfakes can have severe implications for their reputation.

Threat to Society

  • AI-based synthetic media may accelerate the already declining trust in media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society.
  • False information about institutions, policies, and public leaders, powered by deepfakes, can be exploited to spin narratives and manipulate belief.
  • Deepfakes can exacerbate social divisions by spreading disinformation about a community through fake video and audio. There are examples from South Asia, in Myanmar and in India, targeting Muslims.

Threat to Businesses

  • Deepfakes are used to impersonate business leaders and executives in order to facilitate fraud.
  • Deepfakes could pose unique labour and employment risks, as employees rely increasingly on covert video and audio recordings to support their claims of harassment or mistreatment.
  • In this way, businesses stand to lose not only defrauded funds and reputational goodwill; they can also face litigation by shareholders, investigations by regulators, and loss of access to further capital.

Threat to Democracy

  • A well-executed deepfake released a few days before polling, showing a leading political candidate spewing racial epithets or indulging in an unethical act, can damage their campaign.
  • Deepfakes can influence both the voters and the candidates.
  • Deepfakes may also be used for misattribution: telling a lie about a candidate, falsely amplifying their contribution, or inflicting reputational harm.

Deepfake/Synthetic Media for Good


Accessibility

  • Artificial intelligence can build tools to hear, see, and, with Artificial General Intelligence (AGI) in the future, reason with increasing accuracy. AI-generated synthetic media can make accessibility tools smarter and, in some cases, more affordable and personalizable, helping people augment their agency and gain independence. Synthetic media can bring accessible solutions to all.


Education

  • Deepfake technology opens up numerous possibilities in education. Schools and teachers have used audio and video media in the classroom for quite some time. AI-generated synthetic media can bring historical figures back to life for a more engaging and interactive classroom, helping educators deliver innovative lessons that are far more engaging than traditional visual and media formats.


Creativity & Entertainment

  • Deepfakes can be a great tool to realistically realize the primary tenet of comedy and parody: the reflection, stretching, contortion, and appropriation of real events. AI-generated synthetic media offers phenomenal opportunities in the entertainment industry, much of which is already being realized by independent creators on YouTube.

Autonomy & Expression

  • Synthetic media can help human-rights activists and journalists remain anonymous in dictatorial and oppressive regimes. Using technology to report atrocities on traditional or social media can be very empowering for citizen journalists and activists, and deepfake techniques can anonymize their voices and faces to protect their privacy.

Reach and Message Amplification

  • Synthesia, the company behind the Beckham video, and VOCALiD, a voice startup, created tools to natively localize video and audio content for learning tools, brand marketing, audience engagement, customer service, and public messaging to broaden the reach and amplification of the message.

Public Safety & Digital reconstruction

  • Crime-scene reconstruction is a forensic science and art that uses inductive and deductive reasoning along with evidence. AI-generated synthetic media can help reconstruct the scene by inter-relating spatial and temporal artifacts.

Way forward

The 3Cs of Consent, Control, and Collaboration can be an optimum solution to the present-day challenges our society faces.


Consent

  • No one should be synthesized without consent. We propose implementing a digital consent system to streamline the process.
  • For example, synthesizing public figures without consent violates these ethical guidelines, no matter the intention of the content.


Control

  • Actors should be in control of their likeness and should have access to a record of all synthetic-media content in which they appear.


Collaboration

  • There should be a general willingness, within reasonable means, to engage in public discourse and education around synthetic media.

