Can speech on social media incite violence?
By DANIELLE ALLEN AND RICHARD ASHBY WILSON | Special to The Washington Post | Published: August 9, 2019
Coverage of the El Paso and Dayton, Ohio, shootings has put a word into circulation: incitement. Can speech on a social media site, or a presidential platform, incite violence? It’s time for a primer.
Providing examples dating back to 1594, the Oxford English Dictionary offers this definition: “action of inciting or rousing to action; an urging, spurring, or setting on; instigation, stimulation.”
At a May rally in Florida, President Donald Trump emphatically denounced the arrival of immigrants from Central America as an “invasion.” He asked, “But how do you stop these people?” Someone in the audience shouted that they could be shot. The crowd laughed, and Trump responded sympathetically: “That’s only in the Panhandle you can get away with that statement.” On the social media site 8chan, participants are roused by such statements and praise mass killings of immigrants as “scores” achieved by the killers.
Did the shouter in Florida, or the president, incite illegal action? Did participants on 8chan?
In modern constitutional law, incitement entails three elements and is applicable only when all elements are present.
In the early 20th century, incitement law was broadly defined and suppressed speech that merely undermined respect for the law or state authorities. This was replaced by the clear-and-present-danger test, which was intended to loosen restrictions, but instead was used to suppress antiwar campaigns and socialist organizing. The restraints on political speech were lifted in the civil rights and anti-Vietnam War era. In 1969, the Supreme Court established a three-part test for incitement in Brandenburg v. Ohio.
First, the speaker must directly advocate a crime. Denigrating a social group is insufficient; the speaker must advocate an offense such as assault or murder. In writing statutes regulating speech, government authorities cannot suppress speech in a way that is content-based and bans only one viewpoint. A municipal statute cannot prohibit only racist speech, or anti-immigrant rhetoric, or even cross-burning with intent to intimidate on racial grounds. More recently, however, the Supreme Court upheld a Virginia statute banning cross-burning with intent to intimidate, because the statute did not single out racial animus.
Second, the crime being incited must be imminent. How imminent? Courts offer little guidance, but we can glean from a few cases that the time period fluctuates according to the gravity of the crime. If the advocated offense is relatively minor, then imminent means more or less "now." If the offense is grave, such as murder, imminence could stretch to more than a month.
Third, it must be probable that the crime will be committed imminently. How probable? Highly probable, or just more probable than not? We don’t know, as the courts have not told us. This third element remains obscure, and obscurity hinders fair and consistent application of the law.
Incitement is an inchoate crime, which means that the speech act is the crime itself and no bad consequences need ensue. Lacking clear guidance on imminence and probability, prosecutors are often cautious and wait until a crime has been committed, thwarting the preventive rationale for incitement law.
We could easily tighten up the current law of incitement without undermining free-speech protections. Courts could provide more guidance on how imminent and probable the crime being incited must be. Prosecutors could more vigorously indict the most egregious instances of incitement to violence.
Also, the courts could clarify the status of incitement on social media. The Brandenburg test was developed in a pre-internet era and requires updating. Mainstream social media companies such as Facebook and Twitter have hate-speech guidelines that allow them to remove incendiary content. Like private clubs, they set their own terms of service and regulate speech more assiduously than government. For instance, mainstream social media regularly remove content that denigrates racial, religious or immigrant groups, or calls for harm against them.
Fringe platforms such as 4chan and 8chan set no such standards, and they thrive on racist, anti-immigrant and inciting language. They allow far-right communities of hate to coalesce and incite their members to commit mass shootings. Under Section 230 of the Communications Decency Act of 1996, internet providers and social media sites bear no liability for content that third parties post on their platforms. The time has come to challenge this again in court and to pursue civil liability for those platforms that are grossly negligent in regulating the content on their sites.
Social media platforms are like toll roads. They are privately operated providers of a public good — in the one case, transportation; in the other, communication. On toll roads, all the conventional rules of public roads apply. The same should be true of social media. The rules of incitement should apply, and be vigorously enforced, including, if necessary, through extradition.
And in a happy synchronicity, updating the law of incitement and enforcing it on social media platforms will also clarify the rules of speech governing the presidential platform. Let’s help our president out by cleaning up the law of incitement so that the legal jeopardy is clear.
Danielle Allen is a political theorist at Harvard University and a contributing columnist for The Washington Post. Richard Ashby Wilson is the Gladstein Distinguished Chair of Human Rights at the University of Connecticut School of Law.