Leading ladies or digital duplicates?
The emergence of deepfake videos, particularly those involving public figures like Katrina Kaif and Rashmika Mandanna, has triggered a discussion on the growing risks associated with identity theft in the digital age.
Deepfakes are a type of synthetic media, typically in the form of videos, that are created using Artificial Intelligence (AI) and deep learning technologies. These AI algorithms manipulate and superimpose images, audio, or video onto existing video footage to make it appear that someone is saying or doing something they never did. The term “deepfake” is derived from “deep learning” and “fake.”
Deepfake technology has gained notoriety for its ability to generate highly convincing, yet entirely fabricated, content. While deepfakes have legitimate uses in fields like entertainment and computer graphics, they have also raised significant concerns due to their potential for misuse.
In this context, it becomes imperative to explore the subtle yet discernible signs that may indicate the presence of these sophisticated AI-generated forgeries.
· Inconsistencies in facial features: A key indicator of a deepfake is inconsistency in the subject’s facial features. These irregularities can include unnatural skin tones, disproportionate facial expressions, or subtle misalignments of the eyes, nose, and mouth; a minimal automated check along these lines is sketched after this list.
· Imperfections around the edges: Deepfake videos can sometimes exhibit imperfections along the edges of the subject’s face, hair, or body. These anomalies can manifest as blurry outlines, artifacts, or distortions in the surroundings.
· Aberrations in voice and speech patterns: While deepfake technology predominantly focuses on visual manipulation, some creators pair these fabricated visuals with voice impersonation. Detecting discrepancies in speech patterns, accents, or audio quality can be an additional means of spotting a fake.
· Unusual backgrounds and lighting: Deepfake videos may show discrepancies in background scenery, lighting conditions, or shadows that are inconsistent with the video’s purported context. Observing anomalies in the surroundings can be a valuable clue.
· Incongruities in context and behaviour: One of the subtler signs of a deepfake is a misalignment of context and behaviour. This includes unusual reactions, out-of-character gestures, or actions that do not resonate with the subject’s known personality or behaviour.
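For readers curious about what an automated version of these visual checks might look like, the following is a minimal, illustrative sketch in Python. It uses OpenCV’s bundled Haar-cascade face detector to track the face bounding box across the frames of a video and flags frames where the box jumps abruptly, a crude proxy for the facial and edge inconsistencies described above. The video file name and the threshold are hypothetical placeholders, not values from this article, and real deepfake detection relies on far more sophisticated, purpose-built models.

```python
# Illustrative sketch only: flags abrupt frame-to-frame changes in the detected
# face bounding box, a crude stand-in for the "inconsistency" cues listed above.
# Requires: pip install opencv-python. The video path and threshold below are
# hypothetical placeholders.
import cv2

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical input file
JUMP_THRESHOLD = 40               # pixels; arbitrary illustrative value

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(VIDEO_PATH)
prev_box = None
frame_idx = 0
suspicious_frames = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Take the largest detected face in this frame.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        if prev_box is not None:
            px, py, pw, ph = prev_box
            # Sudden jumps in the position or size of the face region between
            # consecutive frames can hint at splicing artefacts around the face.
            jump = abs(x - px) + abs(y - py) + abs(w - pw) + abs(h - ph)
            if jump > JUMP_THRESHOLD:
                suspicious_frames.append(frame_idx)
        prev_box = (x, y, w, h)
    frame_idx += 1

cap.release()
print(f"Frames with abrupt face-region changes: {suspicious_frames}")
```

A heuristic this simple produces many false positives; production detectors are typically trained neural networks that examine blending boundaries, blink patterns, and audio-visual synchrony rather than bounding-box jitter alone.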
While these signs can help identify deepfakes, it is equally important to understand the legal challenges they pose. Addressing the legal intricacies of deepfakes is crucial for protecting individuals’ rights and maintaining trust in digital media. Legal arguments related to deepfakes often centre on issues of privacy, intellectual property, defamation, and cybercrime. Here are some key legal arguments associated with deepfakes:
· Privacy violation: Deepfakes that involve the unauthorised use of a person’s likeness can be considered a violation of their right to privacy. This argument emphasises that individuals have the right to control how their image and likeness are used, and deepfake creation without consent infringes on this right.
· Intellectual property infringement: If a deepfake uses copyrighted material, such as photographs, film footage, or recordings of a person’s voice, without the owner’s permission, it can be seen as intellectual property infringement. Creators of deepfakes may be liable for copyright violations.
· Defamation and false light: Deepfakes that portray individuals engaging in false, defamatory, or misleading activities may give rise to defamation or false light claims. If the fabricated content harms an individual’s reputation, legal action may be taken against the creator.
· Identity theft and fraud: Creating deepfakes for fraudulent purposes, such as financial scams, impersonation, or identity theft, may lead to criminal charges, including identity theft, fraud, or wire fraud, depending on the jurisdiction.
· Cyberbullying and harassment: Deepfakes used to harass, intimidate, or harm individuals can result in legal action under anti-cyberbullying or harassment laws. Such actions can lead to civil and criminal consequences.
· Right of publicity: The right of publicity protects an individual’s right to control the commercial use of their likeness. Deepfakes that exploit a person’s image for commercial purposes without consent can be subject to legal action.
· Fraudulent misrepresentation: Deepfakes that deceive individuals or organisations by impersonating someone for fraud may be viewed as fraudulent misrepresentation, which can lead to legal consequences.
· Cybersecurity and unauthorised access: The act of creating or disseminating deepfakes may involve unauthorised access to personal or corporate data, which could result in legal charges related to cybersecurity breaches.
· Content distribution and social media liability: Social media platforms and content-sharing websites may face legal challenges related to deepfake content that is distributed or hosted on their platforms. Liability may be based on the failure to remove harmful or misleading deepfakes promptly.
· Consent and release agreements: Legal arguments may revolve around consent and release agreements. If individuals involved in deepfake content creation did not provide informed consent or violated release agreements, this may form the basis for legal disputes.
These AI-generated forgeries raise critical issues, from privacy and intellectual property rights to defamation, fraud, and cybercrimes. As the use of deepfakes becomes more prevalent, it becomes increasingly important for individuals, legal authorities, and technology companies to navigate the complex web of challenges they present. Moreover, the determination of liability and the extent of legal consequences often depend on jurisdiction-specific laws and the unique circumstances surrounding each case. With the potential for misuse and harm, addressing the legal implications of deepfakes remains an ongoing and dynamic challenge that requires a vigilant legal and regulatory framework to protect individuals and society at large.
This article is authored by Sanhita Chauriha, technology lawyer, Vidhi Centre for Legal Policy, New Delhi.