Table of Contents
- 1 What Are Public Figures’ and Artists’ Likenesses?
- 2 The Real Costs of AI Likeness Misuse for Public Figures and Artists
- 3 Protecting Public Figures and Artists from AI Exploitation
- 4 Why Licensing Likeness Is Essential for Control and Compensation
- 5 How Public Figures Are Collaborating With AI on Their Own Terms
- 6 The Path Forward in Protecting Creativity in the AI Era
- 7 FAQs
Artificial intelligence is changing how identities are copied, shared, and monetized. In early 2025 alone, celebrities were targeted 47 times by AI-generated impersonations, an 81 percent increase compared to all of 2024. This rise highlights how unregulated likeness use is affecting entertainment and weakening trust between public figures and their audiences.
These incidents include voice clones used in fake advertisements, unauthorized AI videos, and synthetic appearances that spread across platforms faster than they can be removed. Today, convincing replicas can be produced using only a small set of images and commonly available AI tools.
In a previous article, How AI Is Affecting the Music Industry, we examined how AI is reshaping music creation. Similar patterns are now appearing across other parts of the creative world. From realistic avatars to convincing deepfake videos, AI systems can now replicate recognizable individuals at scale.
In this context, protecting likeness means preventing unauthorized replication of a person’s face, voice, or expressive identity while maintaining control over how it is used commercially and creatively.
Protecting likeness is now essential for public figures and artists. It helps preserve reputation, maintain creative control, and reduce long-term damage from misuse. This article explores why proactive protection matters and how identity can be safeguarded in an AI-driven media environment.
What Are Public Figures’ and Artists’ Likenesses?
In the past, a person’s likeness was commonly associated with their face, name, or signature. Today, it includes a broader set of identifiable traits. Visual appearance, voice, and expressive characteristics can be captured, analyzed, and reproduced using AI systems trained on publicly available material.
For public figures and artists, this expanded definition matters because likeness is no longer limited to static images or past recordings. A short interview clip, combined with a small set of photos, can be enough to generate a video that places a person in an entirely new context. These AI-generated replicas may appear in advertisements, promotional campaigns, or social media posts that imply participation or endorsement.
In this environment, likeness refers to the combination of visual, auditory, and expressive traits that audiences associate with a specific individual. It reflects how a person is recognized across media, even when that representation is incomplete, altered, or created without authorization.
The Real Costs of AI Likeness Misuse for Public Figures and Artists
As AI-generated replicas become easier to produce, misuse has shifted from isolated viral incidents to a recurring issue across entertainment and digital media. The AI image generation market continues to grow, with the sector projected to reach $1.08 billion by 2030, up from approximately $349.6 million in 2023, according to Grand View Research. As access expands, so does the likelihood that likeness will be replicated without consent.
Below are some of the most common ways public figures’ and artists’ likenesses are misused today:
1. Fake Endorsements and Eroded Credibility
Unauthorized advertisements featuring public figures have become increasingly common. Tom Hanks publicly warned followers after an AI-generated ad promoted a dental plan using a digitally altered version of him, stating that he had no involvement. Scarlett Johansson has faced similar misuse and has spoken publicly about the need for stronger protections against unauthorized AI-generated likenesses.
These campaigns mislead audiences and damage personal credibility. Once false endorsements circulate, corrections often fail to reach the same audience that saw the original content, leaving lasting confusion behind.
2. AI-Generated Performances Undermining Artists’ Revenue
Synthetic performances have also created new risks for artists. In 2023, AI-generated tracks imitating Drake and The Weeknd spread rapidly online and gained millions of streams before being taken down. Even brief exposure can divert attention and income from legitimate releases while weakening the distinctiveness of an artist’s public identity.
3. Misinformation That Spreads Faster Than Verification
AI-generated videos and audio clips often move across platforms faster than official responses can contain them. Once reposted and redistributed, these materials are difficult to control. For public figures, this speed can distort perception and create long-term reputational harm, even after content is shown to be fabricated.
4. Loss of Creative and Personal Integrity
Beyond revenue and reputation, AI likeness misuse affects how creative work is understood and valued. When a person’s voice or image can be replicated without approval, the boundaries around ownership and representation become less clear. Repeated misuse weakens the connection between creators and audiences and blurs what can be trusted as authentic.
Protecting Public Figures and Artists from AI Exploitation
Preventing AI likeness misuse requires measures that operate before unauthorized content circulates widely. Once a deepfake or voice clone spreads, reputational and commercial harm can be difficult to reverse. Effective protection depends on systems that reduce the likelihood of unauthorized replication and give public figures greater control over how their likeness is used.
Several approaches play a role in limiting misuse:
1. Preventing Misuse at the Source
Protection begins with mechanisms that establish consent and provide clarity around how media is created. Verification tools can help confirm the source of content, while provenance and tracking systems can record when and how synthetic media is generated. Consent-based frameworks add another layer by ensuring creators approve how their likeness is accessed and applied before distribution.
Together, these measures shift likeness protection from reactive takedowns to proactive oversight. They help public figures retain control over their identity and reduce the risk of misuse at scale.
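As a rough illustration of how a consent-based provenance record might work, the sketch below signs a hypothetical usage record with an HMAC so that any later change to its scope can be detected. The field names, key handling, and record format are invented for illustration and do not follow any real provenance standard.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the rights holder or a consent registry.
SECRET_KEY = b"rights-holder-signing-key"

def sign_consent_record(record: dict, key: bytes) -> str:
    """Serialize the record deterministically and sign it with HMAC-SHA256."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_consent_record(record: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare it in constant time."""
    expected = sign_consent_record(record, key)
    return hmac.compare_digest(expected, signature)

# A record granting narrow, time-bound permission for one use of a likeness.
record = {
    "subject": "example_artist",
    "licensed_use": "promotional_video",
    "expires": "2026-01-01",
}
signature = sign_consent_record(record, SECRET_KEY)

print(verify_consent_record(record, signature, SECRET_KEY))   # genuine record verifies

tampered = dict(record, licensed_use="voice_clone_ads")        # scope silently widened
print(verify_consent_record(tampered, signature, SECRET_KEY))  # verification fails
```

Because the signature covers a canonical serialization of the whole record, a platform can check that the stated scope of consent has not been altered without having to trust the party uploading the content.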
2. Platform Responsibilities in Likeness Protection
Platforms and content hosts are often where AI-generated likenesses first appear. Requiring proof of consent before synthetic media is uploaded can help limit unauthorized use. Clear labeling of altered or AI-generated material also gives audiences context and reduces confusion about authenticity.
Some platforms are beginning to test consent-driven models. YouTube’s Dream Track is one example, allowing creators to use participating artists’ voices within defined boundaries, with approval and compensation built into the system. While initiatives like this demonstrate what structured collaboration can look like, enforcement remains inconsistent across platforms.
3. Early Detection and Reporting Systems
Even with preventive controls in place, new AI-generated likenesses continue to surface. Detection systems that rely on watermarking, hashing, or provenance signals can help identify suspicious content earlier in its lifecycle. Flagging potential violations before widespread distribution reduces reliance on repeated takedown efforts and limits long-term harm.
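One of the signals mentioned above, hashing, can be sketched in miniature. The toy average-hash below is a simplified stand-in for the perceptual fingerprints real detection systems use: it flags near-duplicate images even after small edits such as a uniform brightness shift. The 8x8 "thumbnails" and values here are made up for illustration.

```python
def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the image mean."""
    flat = [value for row in pixels for value in row]
    mean = sum(flat) / len(flat)
    return [1 if value > mean else 0 for value in flat]

def hamming_distance(hash_a, hash_b):
    """Count differing bits; small distances suggest near-duplicate images."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# Synthetic 8x8 grayscale "thumbnails" standing in for downscaled video frames.
reference = [[row * 16 + col for col in range(8)] for row in range(8)]
brighter  = [[value + 5 for value in row] for row in reference]   # re-encoded / brightened copy
unrelated = [[(7 - row) * 16 + col for col in range(8)] for row in range(8)]

print(hamming_distance(average_hash(reference), average_hash(brighter)))   # 0: flagged as the same image
print(hamming_distance(average_hash(reference), average_hash(unrelated)))  # 64: clearly different
```

Because each bit is relative to the image's own mean brightness, simple transformations that defeat exact file hashes (recompression, brightness changes) leave the fingerprint largely intact, which is what makes this family of techniques useful for early detection.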
4. The Role of Regulation
Regulation also shapes how these safeguards are adopted. Laws such as the ELVIS Act and established doctrines like the right of publicity are beginning to clarify expectations around consent and accountability for AI-generated voice and likeness. These evolving frameworks influence platform behavior and provide a legal backdrop for prevention, even as standards continue to develop.
Why Licensing Likeness Is Essential for Control and Compensation
Once protective systems are in place, public figures and artists need a clear framework for defining how their likeness can be used when permission is granted. Licensing provides that structure by establishing boundaries around approved use and setting expectations before content is created or distributed.
By clarifying terms in advance, licensing shifts likeness management from reactive enforcement to structured oversight. It ensures that identity use is intentional rather than assumed.
When implemented thoughtfully, licensing supports the following:
1. Control and Representation
Licensing allows public figures to approve or decline specific uses of their likeness before synthetic content is produced. This helps ensure that identity is represented accurately and reduces the risk of misleading appearances.
2. Defined Consent and Accountability
Licensing formalizes consent by outlining how likeness may be accessed, modified, or distributed. Clear agreements make expectations explicit and provide a basis for addressing misuse if boundaries are exceeded.
3. Compensation for Approved Commercial Use
When likeness is used in advertising, entertainment, or other commercial contexts, licensing ensures creators are compensated under agreed conditions. This protects the long-term value of personal image and voice while discouraging unauthorized exploitation.
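To make the idea concrete, a licensing agreement can be thought of as structured data that a system checks before any approved use. The minimal model below is hypothetical (the field names, terms, and royalty figure are invented for illustration); it shows how explicit scope and expiry checks turn consent into something a platform can enforce automatically.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LikenessLicense:
    """Illustrative sketch of a likeness license, not a real rights-management schema."""
    licensee: str
    allowed_uses: set = field(default_factory=set)
    expires: date = date(2026, 1, 1)
    royalty_rate: float = 0.0  # share of revenue owed per approved use

    def permits(self, use: str, on: date) -> bool:
        """A use is permitted only if it is within scope and the license is current."""
        return use in self.allowed_uses and on <= self.expires

# Hypothetical agreement: two narrowly defined uses, a fixed term, a royalty share.
license_terms = LikenessLicense(
    licensee="ExampleStudio",
    allowed_uses={"film_digital_double", "promotional_still"},
    expires=date(2026, 6, 30),
    royalty_rate=0.15,
)

today = date(2026, 1, 15)
print(license_terms.permits("film_digital_double", today))  # True: in scope and current
print(license_terms.permits("voice_clone_ad", today))       # False: never granted
```

Encoding the agreement this way means the default answer for any use not explicitly listed is "no," which mirrors how licensing shifts likeness management from assumed permission to intentional, documented consent.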
How Public Figures Are Collaborating With AI on Their Own Terms
Licensing does more than set boundaries. When consent and accountability are clearly defined, it also creates space for approved collaboration. Some public figures are choosing to work with AI tools under specific conditions, using licensing to maintain control over how their likeness is accessed and applied.
These examples show how artists and public figures are engaging with AI in ways that preserve consent, oversight, and clarity.
1. Music: Setting Clear Standards for Voice Use
In music, AI voice tools raise direct questions about ownership and reuse. Musician Grimes partnered with TuneCore to launch GrimesAI, a platform that allows creators to generate music using her voice with explicit permission. Use of the voice is governed by defined terms, and approved tracks share royalties with the artist.
This approach matters because it replaces informal scraping with a structured agreement. Instead of voice models being trained or deployed without approval, access is limited to specific uses under clear conditions. For musicians, this model demonstrates how voice rights can be managed without relying on takedowns or after-the-fact enforcement.
2. Voice: Preserving Legacy Through Licensed Agreements
Voice presents a unique challenge because it is closely tied to personal identity and legacy. Actor James Earl Jones, known for voicing characters such as Darth Vader and Mufasa, licensed his voice for use in AI-supported projects under agreed terms. The arrangement allows his voice to continue appearing in future productions while remaining aligned with his wishes.
This type of agreement shows how voice licensing can address long-term concerns. It provides a way to manage how a voice is used over time, including after an artist steps away from active performance, while maintaining consent and oversight.
3. Film: Using Digital Doubles with Actor Participation
In film and television, digital likeness has been used for years, but AI tools increase the speed and realism of replication. In Tron: Legacy, Jeff Bridges approved the use of a digitally altered version of himself for flashback scenes. The process involved his participation and consent, combining performance capture with visual effects.
This example highlights an important distinction. Digital doubles can be used responsibly when actors are involved in the process and approve how their likeness appears on screen. Without that involvement, similar techniques can easily cross into unauthorized use. Clear agreements and direct participation help set boundaries around acceptable use.
The Path Forward in Protecting Creativity in the AI Era
Protecting creativity in the AI era depends on systems that respect the people behind the work while providing clear signals about how content is created and used. Tools such as watermarking and labeling can help add context, but their value depends on consistent standards and enforcement.
Uncertainty around how AI systems are trained remains part of the challenge. Ongoing debate over fair use and permission affects not only creative works but also personal likeness, especially as voices and identities become easier to reproduce at scale.
Ultimately, protecting digital likeness comes down to accountability and design choices. When platforms and developers prioritize transparency and consent, individuals retain greater control over how they are represented, how their work is used, and how their legacy is preserved.
FAQs
What is AI likeness misuse?
AI likeness misuse refers to the unauthorized replication of a person’s face, voice, or expressive identity using artificial intelligence tools. It often appears as fake endorsements, synthetic performances, or altered media that suggests participation without consent.
Why are public figures and artists especially vulnerable?
Public figures have a large amount of publicly available media, including interviews, performances, and images. Even a limited dataset can be used to generate convincing replicas that spread quickly across digital platforms.
How does likeness misuse harm public figures and artists?
Unauthorized replicas can mislead audiences, erode credibility, and divert attention or revenue away from legitimate work. Even brief circulation of synthetic content can create confusion that outlasts the original post.
Why is proactive protection more effective than takedowns alone?
Once AI-generated content spreads widely, it becomes difficult to fully contain. Proactive systems that establish consent, oversight, and verification before distribution are more effective than relying only on removal after misuse occurs.
How can public figures and artists keep control over their likeness?
Control comes from clear consent frameworks, structured licensing agreements, and early detection systems. These measures help ensure that identity use is intentional, transparent, and aligned with the individual’s approval.