The Deepfake Double Standard: First Amendment Showdown Looms
Why Texas's New Deepfake Law Could Reshape Digital Rights
In 2023, Texas became one of the first states to make AI-generated fake pornography a crime. The groundbreaking law responds to the rapid rise of generative AI tools that let anyone create realistic but fake videos and photos of real people. Under the Texas law, creating or sharing explicit deepfake content without permission can lead to jail time and heavy fines.
Texas Penal Code Section 21.165, titled "Unlawful Production or Distribution of Certain Sexually Explicit Videos," criminalizes the creation and distribution of deepfake pornography without the depicted individual's consent. The statute does not specifically reference pornographic actors or distinguish between public figures and private individuals. Instead, it broadly prohibits the unauthorized use of any person's likeness in sexually explicit deepfake content.
The rationale for this inclusive approach is to protect all individuals from the harms associated with non-consensual deepfake pornography, regardless of their profession or public status. By encompassing both public figures, such as pornographic actors, and private individuals, the law aims to:
Safeguard Personal Privacy: Prevent unauthorized exploitation of an individual's likeness in explicit content.
Prevent Emotional and Reputational Harm: Mitigate the distress and potential damage to personal and professional reputations caused by the dissemination of such material.
Deter Malicious Use of Technology: Discourage the misuse of artificial intelligence to create deceptive and harmful content without consent.
Notably, Texas does not criminalize the malicious use of AI to generate non-sexual depictions of real people; a fabricated but non-explicit video falls outside the statute entirely.
The Broader Ethical Question
A deepfake combines "deep learning" artificial intelligence with "fake" content: the AI analyzes real photos or videos of a person to generate new, artificial content that looks remarkably real. While deepfakes started as elaborate tricks in movies and on social media, they have become weapons for harassment and fraud. Research suggests that 98% of deepfake videos online contain explicit content, and that 99% of those target women without their knowledge or permission. The technology has spread from celebrities to everyday people, including students creating and sharing fake explicit images of classmates.
The Texas Double Standard
If a production studio creates an AI-generated version of Scarlett Johansson in an R-rated feature film, the content is arguably protected under the First Amendment so long as it avoids false advertising or infringement of her right of publicity. The studio might find its name on the defendant's side of a civil lawsuit, but the Texas Rangers would not serve a warrant.
If the same AI likeness of Ms. Johansson stars in a movie premiering on Pornhub, it suddenly falls under Texas's criminal statute because it is deemed to violate personal dignity, autonomy, and public decency.
The disparity arises because society (and often the courts) treats sexually explicit material differently, even when it would otherwise fall under the umbrella of protected speech. This prudishness invites legal witch hunts fueled by laws that reflect conservative views on sexuality and prioritize protecting individuals from sexualized depictions over protecting them from other misuses of their likeness.
It also stifles creative expression by enshrining a government-endorsed bias toward "respectable" forms of expression, like feature films or fine art, over "lower" forms like pornography.
Back in the Day
The First Amendment used to mean something. Surprisingly, the Supreme Court has been fiercely protective of pornography provided it does not meet the legal standard for obscenity established in Miller v. California (1973). Under the Miller test, material is obscene only if it satisfies all three prongs: it appeals to the prurient interest (whatever that is) under contemporary community standards; it depicts sexual conduct in a patently offensive way as defined by state law; and, taken as a whole, it lacks serious literary, artistic, political, or scientific value (judged by a national standard, not a local one).
The Court’s rationale has been to protect expression even when society finds it distasteful. Cases like Ashcroft v. Free Speech Coalition (2002), which invalidated a law banning virtual child pornography, demonstrate the Court's unwillingness to let the government criminalize protected speech simply because it offends sensibilities.
Texas’s deepfake law, if challenged, would likely face scrutiny for its selective targeting of sexually explicit material. Critics could argue that:
It criminalizes speech based on its content (sexual explicitness), the kind of content-based restriction that ordinarily triggers strict scrutiny.
It imposes a viewpoint-driven restriction, favoring "respectable" uses of AI while penalizing pornographic uses.
It ignores the First Amendment’s strong protections for adult content established in cases like Miller and Ashcroft.
Bottom Line
The Texas law reflects cultural attitudes toward sexuality more than a coherent legal framework for deepfake technology. By criminalizing sexually explicit deepfakes but not other AI-generated misuses of likenesses, it risks running afoul of First Amendment protections. This selective morality could face strong constitutional challenges, especially if high-profile cases push the issue into the federal courts. Until then, these laws function as a patchwork of moral policing, shaped as much by cultural bias as by legal precedent.