Censoring the technologies of free expression

Image generated by Stable Diffusion with the prompt "freedom of speech for robots"

What if you could generate a photorealistic picture of anything you could imagine? It would have sounded like wizardry even 10 years ago, but it’s the reality of AI programs like DALL-E 2 and Stable Diffusion. Unsurprisingly, this powerful technology is raising fears about everything from misinformation to defamatory images to political deepfakes.

Against this backdrop, a Washington Post article from the end of September bears dire warnings about the dangers of making this new technology available to the public, along with calls for new legislation to curb its harmful use — that is to say, to censor it. That article contains no mention of the First Amendment, or even of free speech interests more broadly, so we’d like to offer that perspective.

While these AI programs are new, the problems they present aren’t new to our system of free expression. Fear and calls for censorship have always accompanied advances in communications technology. We should not be so arrogant as to assume that we know how any new technology will play out. As bad as we are at predicting the risks of new technologies, we are certainly worse at imagining the ways they might transform our world for the better.

Some rightly feared the ability of the printing press to spread falsehoods across the world. Those harms were real, but who among the government and church censors of the time could have imagined the scientific and cultural advancement, let alone the democratization of knowledge, that followed its creation?

The First Amendment and its associated case law represent decades of wrestling with and balancing the harms caused by speech, and they offer important guidance, lest we stifle a valuable and potentially paradigm-shifting technology.

To analyze these AI programs from a free speech perspective, we must first consider whether the First Amendment applies to the speech at all, and then whether it protects that speech from government regulation. Some argue that because the content is computer-generated, no human speech rights are implicated, but communication technologies necessarily implicate the First Amendment rights of humans in four ways.

First, computer code (like the alphabet) is speech-related, so the AI program implicates the First Amendment rights of its programmers. Second, the user who enters a prompt and generates the image is engaging in expression. Third, those who go on to share the resulting image are conveying their own expression. Finally, the audience that wishes to see the image has a right to hear and receive information. To say that speech rights aren’t involved here because a computer is generating the content would be as ridiculous as saying Daft Punk or any other electronic musician doesn’t enjoy the same free speech rights as other musicians because they use computers to synthesize music.

The question then becomes whether that speech is protected by the First Amendment.

In the United States, speech is presumptively protected by the First Amendment, and thus cannot be legislated against, unless it falls into a category of unprotected speech as defined by the Supreme Court. Unprotected categories include defamation, true threats, and incitement to imminent lawless action.

It’s not hard to imagine that an AI-generated image might sometimes fit into a category of unprotected speech — consider someone posting a generated image of you breaking into their car. But there is already a legal remedy for this harm — a defamation lawsuit.

Anyone with time and YouTube has the tools to learn how to make a similarly defamatory image in Photoshop. The problem is the same in principle as the one faced since the first photos were doctored in the 19th century. The difference with AI programs is only the level of effort required.

Technology is the handmaiden of communication. The inseparable relationship between speech and the technology that amplifies it is embodied in the text of the First Amendment itself, where speech and the press (referring to the printing press) sit in the very same clause: “Congress shall make no law [...] abridging the freedom of speech, or of the press[.]” Because of this tight relationship, regulations restricting access to and use of communication technology threaten our very right to speak.

So what’s to be done about the potential harms of such new technologies?

As AI technology becomes more sophisticated and ubiquitous, we should think carefully about the ramifications of government regulations and censorship on this new technology. It’s possible that the solution to the problem of technology is to be found in the technology itself. Just as one possibility, an AI program could be trained to quickly identify altered or AI-generated images to combat misinformation or alert people when their likeness has been used in potentially defamatory generated images. 

It is barely an exaggeration to say that AI technologies like DALL-E, Stable Diffusion, and their inevitable successors are the stuff of science fiction — a means to turn imagination into reality. We should not let fear of harm overpower our imagination for the possibilities of what may be the next printing press or internet. Imposing a regime of censorship on this nascent technology may be akin to smothering it in the crib.


Ryne Weiss is Research Programs Coordinator at the Foundation for Individual Rights and Expression (FIRE).

Ronald Collins is a retired law professor, FIRE Fellow, and coauthor (with David Skover) of “Robotica: Speech Rights and Artificial Intelligence” (2018).
