Civil Liability for Internet Companies to Help Prevent International Terrorism

On May 18, the Supreme Court issued its much-anticipated decisions in Twitter v. Taamneh and Gonzalez v. Google. Both cases involved terrorist attacks by members of ISIS. In both cases, plaintiffs alleged that social media companies helped ISIS recruit new members by amplifying ISIS content and promoting that content to social media users. In both cases, plaintiffs sued the companies under the Justice Against Sponsors of Terrorism Act (JASTA), a federal statute that creates civil liability for aiding and abetting acts of international terrorism.

In Taamneh, the Court held that plaintiffs failed to state a claim under JASTA. In Gonzalez, the Court granted cert. to consider the scope of immunity under Section 230(c) of the Communications Decency Act. However, the Court ducked the Section 230 immunity issue and remanded Gonzalez to the Ninth Circuit to reconsider its decision in light of Taamneh.

Several other scholars have published commentary on the Court’s decisions in Taamneh and Gonzalez. In this post, I focus on a potential congressional response. When it enacted JASTA as an amendment to the Anti-Terrorism Act (ATA), Congress emphasized its intent to provide the “broadest possible basis” for civil litigants to seek relief (130 Stat. at 852, § 2(b)). However, Congress did not anticipate potential claims against internet companies.

Since Taamneh effectively bars liability for social media companies under JASTA, Congress should pass a new statute that is specifically designed to address the liability of social media companies for aiding and abetting acts of international terrorism. The goal should be not only to provide remedies for victims, but also to conscript social media companies as partners in the global effort to prevent, to the maximum extent feasible, future acts of international terrorism.

The Basic Proposal

Congress should create a statutory duty for social media companies to prevent transmission of messages that meet two criteria: 1) the message would be understood by ordinary listeners as incitement or inducement to commit an act of international terrorism; and 2) there is a significant risk that recipients of the message will commit such a crime. The statute should create a civil cause of action against companies that violate their duty to block transmission of such messages. It should also carve out an exception to Section 230 immunity for claims within the scope of the statute.

The statute should create a duty for companies to remove prohibited content within 24 hours after it is posted. However, the statute should preserve immunity from civil liability for any company that makes a reasonable, good faith effort to comply with the content removal obligation but is unable to do so. Finally, the statute should create a technical working group with representatives from government and industry to develop a set of best practices for implementing the duty to remove offending content. The statute should include a safe harbor provision for companies that comply with recommended best practices. The remainder of this post elaborates key elements of this proposal.

Which Companies?

To implement this proposal, Congress would need to consider which companies are covered. There is a compelling argument for exempting small companies—perhaps those with fewer than fifty million registered users—because compliance with the statutory duty to block offending content would impose an excessive burden on them. Congress should limit the cause of action to companies that operate social media platforms; that will require a statutory definition of the term “social media platform.” The current language in Section 230, which refers to providers of “interactive computer service[s],” sweeps far too broadly.

Understood by Ordinary Listeners

Legal tests for offensive or dangerous speech can be framed in terms of the intent of the speaker, the likely consequences of the speech, or the actual content of the words or images conveyed. For the proposed statute to be effective, companies must be able to implement it through a combination of automated filters and human content moderation. Automated filters cannot readily infer a speaker’s intent; it is somewhat easier (although not a trivial task) to screen messages based on the words and images they actually convey. The “understood by ordinary listeners” test effectively directs companies to focus on the actual content, rather than the intent of the speaker, to determine which messages should be blocked.
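
To make the point concrete, here is a minimal sketch of the kind of two-stage screening pipeline this paragraph contemplates: an automated classifier scores each message on its content alone, and borderline cases are escalated to human moderators. The classifier stub, thresholds, and function names are hypothetical placeholders, not any company’s actual system.

```python
# Illustrative sketch of a two-stage content-moderation pipeline; the classifier,
# thresholds, and routing labels are hypothetical, not any company's real system.
# The key design point: screening turns on the message's content, not the speaker's intent.

from dataclasses import dataclass


@dataclass
class Message:
    message_id: str
    text: str


def incitement_score(message: Message) -> float:
    """Stub for an automated classifier estimating how likely an ordinary listener
    would read the message as incitement or inducement to commit an act of
    international terrorism (0.0 = clearly not, 1.0 = clearly yes)."""
    # A real system would apply a trained model here; this stub always returns 0.0.
    return 0.0


# Hypothetical cutoffs; in practice these would track the working group's best practices.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5


def screen(message: Message) -> str:
    """Route a message based on its content alone: block, escalate, or allow."""
    score = incitement_score(message)
    if score >= BLOCK_THRESHOLD:
        return "block"          # automated removal
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # escalate to a human moderator
    return "allow"
```

Nothing in this pipeline attempts to reconstruct the speaker’s purpose; the classifier sees only the words and images conveyed, which is what the “understood by ordinary listeners” framing asks of it.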

Incitement or Inducement

The phrase “incitement or inducement” is similar to the classic First Amendment test from Brandenburg v. Ohio, which permits governments to “forbid or proscribe advocacy of the use of force . . . [that] is directed to inciting or producing imminent lawless action.” The word “inducing” is similar to the word “producing” in the Brandenburg test, but “inducing” is more precise. The cause of action (and the exception to Section 230 immunity) should cover both incitement and inducement because they are two different ways that people can use internet speech to encourage terrorist acts.

Significant Risk

Brandenburg exempts from First Amendment protection speech that is “likely to incite or produce” lawless action. The question arises: how likely is “likely”? In this context—where we are concerned about inciting or inducing an act of international terrorism—a “more likely than not” test is inappropriate because the gravity of the potential harm is so extreme. Chief Justice Vinson, writing for a plurality in Dennis v. United States, quoted Judge Learned Hand approvingly as follows: “In each case [courts] must ask whether the gravity of the ‘evil,’ discounted by its improbability, justifies such invasion of free speech as is necessary to avoid the danger.” In situations where there is a risk that speech transmitted over the internet may incite or induce someone to commit an act of international terrorism, the potential harm is sufficiently grave that restrictions on free speech are justified even if the probability that the harm will ensue is well below fifty percent. The “significant risk” formulation is intended to convey this idea.
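
Hand’s discounting language can be restated, loosely, as an expected-harm comparison. The inequality below is my own gloss on the quoted formulation, not a test the Court has articulated: P is the probability that the speech will incite or induce an act of international terrorism, G is the gravity of that harm, and B is the burden the restriction places on free expression.

```latex
% A loose formalization of Judge Hand's discounting language (a gloss, not the Court's test).
% Restriction is justified when the gravity of the evil, discounted by its improbability,
% outweighs the burden on free speech:
\[
  P \cdot G > B
\]
% Because G is extreme for mass-casualty terrorism, the inequality can hold even when
% P is well below 0.5; that is the intuition the "significant risk" standard captures.
```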

Evaluating the risk that a particular message will incite or induce international terrorism necessarily depends on context. Some may object that it would be unduly burdensome to require social media companies to perform that type of risk assessment. However, the proposed statute does not require perfection. Companies could avoid civil liability by making a reasonable, good faith effort to block messages that create a significant risk that recipients will commit an act of international terrorism. Given the capacity of large social media companies to cause widespread harm by disseminating terrorist messages to millions of people, and given the vast resources at their disposal, it is not unduly burdensome to require them to make reasonable efforts. In actual civil litigation, factfinders could rely partly on industry experts to determine whether a particular company made a “reasonable, good faith effort” in a particular case.

Imminence

The Brandenburg formulation permits governments to forbid speech only if that speech tends to incite or produce “imminent lawless action.” Although the imminence requirement is an important component of the Brandenburg test, the Supreme Court has relaxed the imminence requirement in two First Amendment cases where dangerous speech involved a potential threat of violence. First, in Virginia v. Black, the Court held that a state may prohibit “statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals,” even if such statements do not present an imminent threat of violence. Then, in Holder v. Humanitarian Law Project, the Court upheld the federal statute barring material support to designated foreign terrorist organizations, even though the statute criminalized speech that did not present an imminent threat of violence. The Court’s decisions in Black and Humanitarian Law Project support the validity of a cause of action that does not include an imminence requirement.

From a policy standpoint, there are three reasons why a cause of action for aiding and abetting international terrorism should not include an imminence requirement. First, the “significant risk” element of the proposed statute already limits its reach. The inclusion of an imminence requirement in addition to the “significant risk” requirement would undermine the preventative goal of the statute. Second, the proposed cause of action addresses only acts of international terrorism. In accordance with Judge Hand’s formulation, “the gravity of the ‘evil,’ . . . justifies such invasion of free speech as is necessary to avoid the danger.” Here, the scale of potential violence justifies a relaxation of the imminence requirement.

Finally, the proposed statute addresses only civil liability, not criminal liability. In contrast, the Supreme Court developed the First Amendment test for incitement in a series of criminal cases. The civil/criminal distinction is important for two reasons. First, the consequence of imposing criminal punishment on individuals (loss of liberty) is more severe than the consequence of imposing civil liability on companies (loss of money). Therefore, the government should be required to meet a higher burden for criminal punishment than for civil liability. Second, the proposed statute promotes a vitally important public safety function by helping to prevent large-scale violence. If the imminence requirement is not relaxed in this context, the statute will not be effective in promoting that public safety objective.

24-Hour Removal Provision

The proposed statute should require internet companies to make a reasonable, good faith effort to block or remove offending content within 24 hours after it is posted. This proposal is similar to a German law, the Netzwerkdurchsetzungsgesetz (“NetzDG”), which took effect in October 2017. Subject to certain exceptions, the NetzDG requires companies that operate internet platforms to remove or block “access to content that is manifestly unlawful within 24 hours of receiving the complaint.” Leading internet companies have already taken substantial steps to enhance their technical capacity to comply with such 24-hour takedown requirements. In June 2017, Facebook, Microsoft, Twitter, and YouTube announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT). A shared database “allows member companies to . . . identify and remove matching content — videos and images — that violate our respective policies or, in some cases, block terrorist content before it is even posted.” The existing database provides a solid foundation for companies to screen content that constitutes incitement or inducement to commit acts of international terrorism.
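
As a rough illustration of how a shared hash database supports this kind of takedown duty, the sketch below fingerprints uploaded media and checks it against a pool of digests contributed by member companies. It uses an exact SHA-256 digest for simplicity; real hash-sharing systems rely on perceptual hashes so that re-encoded or lightly edited copies still match, and the function names here are hypothetical rather than drawn from GIFCT’s actual tooling.

```python
# Illustrative sketch of hash-database screening; not GIFCT's actual implementation.
# Exact SHA-256 digests are used for simplicity, whereas production systems use
# perceptual hashes that tolerate re-encoding and minor edits.

import hashlib

# Hypothetical shared pool of digests of previously identified terrorist content,
# contributed by participating companies.
shared_hash_db: set[str] = set()


def fingerprint(media_bytes: bytes) -> str:
    """Compute a digest of an uploaded image or video file."""
    return hashlib.sha256(media_bytes).hexdigest()


def register_known_content(media_bytes: bytes) -> None:
    """Add a removed item's digest to the shared pool."""
    shared_hash_db.add(fingerprint(media_bytes))


def should_block(media_bytes: bytes) -> bool:
    """Screen an upload: block it if its digest matches previously identified content."""
    return fingerprint(media_bytes) in shared_hash_db
```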

Of course, no screening mechanism is perfect. In March 2019, a gunman killed 51 people in Christchurch, New Zealand, and live-streamed the attack on Facebook. Despite Facebook’s efforts to remove the video, thousands of individual social media users were able to disseminate it more rapidly than the company could act to block transmission. It is questionable whether companies will ever be able to completely block transmission of content that goes viral. Therefore, the proposed statutory provision simply requires companies to make reasonable, good faith efforts. Application of the reasonableness requirement will necessarily evolve over time as companies improve their capacities to address these types of problems.

Technical Working Group and Safe Harbor

Finally, the proposed statute should establish a technical working group with government and industry representatives to develop improved technical solutions to the problem of screening and blocking content that constitutes incitement or inducement to commit acts of international terrorism. The main goal would be to enhance the ability of companies to prevent individuals from using their platforms to incite or induce terrorist acts. If the working group makes significant progress over time, one side effect would be to ratchet up the reasonableness standard that companies are required to satisfy to avoid civil liability. The statute should include a safe harbor provision so that any company that complies with the best-practice recommendations of the working group would be presumptively immune from civil liability.

Conclusion

Congress did not envision liability for internet companies when it enacted JASTA. For that reason, among others, the Supreme Court was right to reject civil liability for such companies under the existing statute. However, international terrorism remains an ongoing threat and terrorists continue to exploit social media platforms to recruit new members and incite violence. Leading social media companies have taken some steps to address these problems, but they could do more. Although partisan divisions in Congress have inhibited efforts to regulate social media, there is broad bipartisan agreement on the need to combat international terrorism. In light of recent Supreme Court decisions, now is the time for Congress to enact a new statute to impose civil liability on social media companies that fail to implement best practices to prevent individuals from using their platforms to incite or induce acts of international terrorism.