DEEP FAKES TAKING OVER PERSONALITY RIGHTS
- CIIPR RGNUL
- Aug 18
Ansa Alexander is a 2nd-year student at the National University of Advanced Legal Studies
Introduction
The impact of fast-evolving AI technologies is felt in every facet of our lives. In recent decades, the entertainment industry has entered a new phase of growth, significantly influenced by the rising popularity of social media influencers. Today, AI can generate the voice and mannerisms of virtually any individual. Deepfake technology applies deep-learning algorithms to audio-visual inputs to generate output that closely resembles the personality traits and mannerisms of the person being imitated. This capability is flooding social media platforms, where numerous celebrities have become victims of convincing deepfake videos, as seen in the recent landmark Delhi High Court ruling granting a John Doe injunction to protect Ankur Warikoo from unauthorised AI-generated impersonations falsely depicting him endorsing WhatsApp stock-tip schemes. The matter is of such concern that the Ministry of Electronics & IT (MeitY) issued an advisory directing social media intermediaries to be diligent in identifying deepfake content and to act on reported content within 36 hours. With a long line of companies developing their own technologies for producing deepfakes, the pages circulating these videos are attracting millions of views, highlighting the publicity such content receives.
Celebrity Victimisation and the Rise of Personality Rights
A long line of celebrities, from Rashmika Mandanna, Alia Bhatt and Katrina Kaif to Priyanka Chopra, have featured in videos where their faces were morphed onto other footage. In November 2022, legendary actor Amitabh Bachchan approached the court seeking a remedy against the increasing misuse of his personality rights for commercial gain. In yet another instance, the Delhi High Court, in a landmark decision in favour of Anil Kapoor, set a precedent by restraining the unauthorised commercial use of the actor’s personality attributes. The decision restrained the misuse of his voice, image and the popular “jhakaas” catchphrase, thereby reinforcing the protection of celebrity rights in the evolving landscape of digital media. When personalities attain celebrity status, they acquire enhanced recognition and commercial value in their names, allowing them to earn from the products they promote and the views they attract. This renders them especially vulnerable to AI deepfake technologies, as higher financial stakes follow. Hence, an enhanced grade of rights, called celebrity rights, has been recognised for public figures. Besides violating the privacy of these individuals, deepfake videos also catch the public eye. Scenes in these deepfake videos often seek to popularise a particular political party and thereby unfairly harvest political gains. One example is the fake videos of Bollywood actor Ranveer Singh that circulated during the Lok Sabha elections, in which he appears to criticise a particular party. Such videos have the capacity to spread like wildfire and influence public opinion negatively. The impact of deepfakes is not limited to celebrity figures; it can create ramifications for the economy as well.
This is evident from the recent cautionary notices released by the National Stock Exchange and the Bombay Stock Exchange after deepfake videos were disseminated falsely showing the CEOs of these exchanges giving stock and investment recommendations.
Legal Framework Protecting Personality Rights in India
With an increasing number of people having access to the internet and AI, it is becoming comparatively easy to use the personality of celebrities to further one’s own interests. In India, personality rights fall within the ambit of intellectual property law, entitling an individual to be left alone and to protect his or her image from unauthorised commercial use. The right to privacy gained recognition in India with the Puttaswamy judgement. Even though personality rights initially found no recognition as a trademark, this changed with ICC Development (International) Ltd v Arvee Enterprises (2003). Personality rights are not explicitly incorporated in any statute for now, but they find indirect recognition under the Copyright Act and the Trade Marks Act. Additionally, the Supreme Court’s ruling recognising the right to privacy as a fundamental right provides a basis for protecting individuals from any unauthorised use of their persona. The Copyright Act 1957 contains provisions protecting performers’ and artists’ rights, and the Trade Marks Act 1999 safeguards an individual’s name, signature and other unique identifiers. As far as a celebrity is concerned, it is the personality and celebrity persona itself that forms the artwork which needs to be protected. Widely used AI tools have crossed paths with the law as well: OpenAI, the parent company of ChatGPT, which has become a household name, was confronted by Hollywood actress Scarlett Johansson, who alleged that its chatbot used a voice resembling hers even though she had denied permission on multiple occasions. Considering such circumstances, the courts have on many occasions reaffirmed the personality rights of an individual and clarified a person’s explicit right to and control over them. Recently, the Bombay High Court observed in Arijit Singh v. Codible Ventures LLP that the singer has rights over his name, images, voice, likeness and all other personality attributes. The company was caught in a legal battle with the famous singer after it generated Arijit Singh’s voice with the help of AI tools and used it for the promotion of certain apps without his permission. On another occasion, the Madras High Court addressed the violation of the personality rights of Shivaji Rao Gaikwad, aka Rajinikanth, whose style, demeanour and name were used for a film without the actor’s explicit consent.
Regulatory and Global Responses to Deepfake Technology
The large-scale ease of creating and disseminating deepfakes also poses a significant challenge to the integrity of courtroom evidence. The new criminal laws recognise electronic evidence as documents and broaden the scope of admissibility in court, simultaneously raising concerns over the potential misuse of deepfaked or AI-generated content that may bypass security protocols and the Section 65B certification requirements. The Delhi High Court, in Nirmaan Malhotra v. Tushita Kaul, declined to rely on photographs submitted by a husband alleging his wife’s adulterous relationship. The court observed that, in an era of deepfakes, it was unclear whether the woman in the images was indeed the respondent, and emphasised that such claims must be substantiated with proper evidence before the Family Court. This illustrates how deepfakes can reshape the landscape of evidence in the criminal justice system: they pose challenges to self-authenticating evidence, and fast-developing generative AI is poised to create dilemmas for judicial and investigative authorities. The ability of AI to create content indistinguishable from real content makes it hard for anyone to tell one from the other. Many countries have taken steps to regulate this largely unregulated field. For example, China has rolled out regulations requiring watermarking of content generated with the help of generative AI, and has also prompted its media actors to take steps to prevent the spread of false information. In Europe, the French National Assembly has proposed a bill that aims to protect original authors from unauthorised generative content produced by AI tools. The United States has been innovative in this respect by proposing a draft bill called the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act.
The aim of the draft is to grant an individual in media a ‘digital replication right’, which is nothing other than control over the authorisation of the use of one’s likeness in a digital replica. In India, legislations like the IT Act 2000, the Digital Personal Data Protection Act 2023, and the existing Copyright Act and Trade Marks Act can harmoniously safeguard the personality rights of an individual. Sections 66E, 67 and 67A of the IT Act penalise dealing with a person’s private images without consent and criminalise obscene content online, safeguarding against such misuse. Further, the DPDP Act addresses consent, correction and erasure of an individual’s personal data, and mandates that any use of personal data requires the explicit consent of its owner. Biometric and facial data misused by AI-generative tools without an individual’s consent can be brought under these provisions. Further, celebrity figures and influencers can assert IP rights to prevent unauthorised endorsements or brand affiliations generated via AI deepfakes or manipulated media.
The Indian courts have clarified their stand on protecting the personality rights of the various celebrities who have approached them. How far these older laws can regulate the leap of artificial intelligence is something only time can tell.
Conclusion
The evolution of artificial intelligence and deepfake technologies marks a turning point in the interplay between innovation and individual rights. Over the years, AI tools have increasingly replicated human characteristics with remarkable accuracy. This poses unprecedented legal, ethical, and societal challenges, particularly with respect to personality rights. By combining intellectual property principles with privacy jurisprudence, Indian courts have started to proactively recognise and defend these rights. However, statutory silence on personality rights continues to leave gaps in enforcement, which is allowing technological misuse to outpace legal reform. The global shift toward recognising digital replication rights and watermarking obligations is a positive step, and India must not lag behind in codifying clear protections that reflect the complex realities of AI-induced identity manipulation.
Going forward, the Indian legal architecture must evolve from piecemeal protections under existing laws to a comprehensive regime which addresses digital impersonation, ensures informed consent for the use of personality traits, and provides swift remedies against their misuse. As courts continue to interpret laws creatively to keep pace with technological growth, legislative action remains critical. Without prompt statutory intervention, the law risks becoming a passive observer rather than an active guardian of individual dignity in the digital age.