Supreme Court to Consider Tech Companies’ Liability for Terrorism



On February 21 and 22, the Supreme Court will hear oral argument in two cases, Gonzalez v. Google and Twitter v. Taamneh, that raise questions about how a civil cause of action set forth in the Anti-Terrorism Act (ATA) applies when known terrorist organizations use social media services. Both cases involve terrorist attacks (in Paris and Istanbul, respectively) by members of ISIS. In both cases, plaintiffs allege that social media companies helped ISIS recruit new members by operating computer algorithms that amplified ISIS content and promoted that content to social media users.

Gonzalez asks whether platforms are entitled to Section 230 immunity when they provide targeted recommendations to users. Taamneh asks whether a company’s failure to take sufficiently aggressive steps to prevent use of its platform by a known terrorist organization gives rise to “aiding and abetting” liability under the ATA for a subsequent act of terrorism. The Court’s opinions in these cases, if not narrowly crafted, could dramatically change the way that internet platforms distribute content to social media users.

Gonzalez v. Google

On November 13, 2015, several diners were killed in a bistro as part of a series of attacks perpetrated by ISIS members throughout Paris, France. The estate and family members of one of the victims, Nohemi Gonzalez, sued Google under the ATA, alleging that the company (through YouTube) aided and abetted the terrorist activities of ISIS. The question before the Court is whether the plaintiffs’ action is barred by Section 230(c) of the Communications Decency Act, which grants internet companies immunity from civil liability for content posted on their platforms by third parties. Commentators have praised Section 230 as the statute that created the modern internet. The central issue in Gonzalez is whether a platform is entitled to immunity under Section 230 when it makes targeted recommendations to social media users in an effort to induce them to view third-party content.

Petitioners in Gonzalez urge the Court to draw a line between passively hosting content posted by a third party and actively recommending that content to other social media users. In their view, internet companies are entitled to Section 230 immunity when they passively host content posted by a third party. However, companies are not entitled to immunity under the statute when they “push” recommended content to particular users. Recommendations, they argue, are not third-party content, but rather unsolicited information that platforms themselves create to induce users to spend more time on the platform. It follows, on this view, that companies lose immunity when they promote terrorist content by operating algorithms that steer targeted users toward it.

In contrast, respondents argue that it is impossible to draw a practical distinction between “passively hosting” content and “actively recommending” content because the latter is an inextricable part of the platforms’ role in hosting and displaying content. The companies contend that the process of sorting and organizing information in a manner that is calculated to attract user attention is an integral part of their business operations. If the Court rules that Section 230 does not grant them immunity for these types of operations, the companies would be forced to choose between: (a) losing Section 230 immunity altogether (which would be directly contrary to congressional intent); or (b) radically altering their business practices in a way that would adversely affect the daily experience of social media users.

In a 2019 case, Dyroff v. Ultimate Software Group, Inc., the Ninth Circuit rejected an argument similar to petitioners’ argument in Gonzalez. At issue in Dyroff were email notifications containing links that invited users to view certain content on the platform’s website. The court held that the notifications were “tools meant to facilitate the communication” of third-party content and thus fell within the scope of Section 230 protection. In Gonzalez, the Ninth Circuit applied this precedent and cited a Second Circuit opinion, Force v. Facebook, in which the Second Circuit held that Facebook is entitled to Section 230 immunity for its friend suggestions. Both Gonzalez (in the Ninth Circuit) and Force (in the Second Circuit) contain vigorous dissents arguing that recommendations by platforms go well beyond “traditional editorial functions,” as Judge Katzmann put it in his dissent in Force.

A decision for the petitioners in Gonzalez would substantially restrict the scope of immunity under Section 230. It would probably also force many existing internet companies to make significant changes to their business models. Moreover, any attempt to draw a line between “passively hosting” content and “actively recommending” content would likely create ongoing challenges for lower courts in making case-specific determinations. In the words of respondents, the internet could “devolve into a disorganized mess and a litigation minefield.” On the other hand, a ruling for the petitioners could force social media companies to make major improvements in their algorithms and enhance their capacity to weed out socially harmful content. In the most optimistic scenario, such a decision might trigger changes in business practices that enhance the experience of social media users by limiting their exposure to disturbing, unpleasant content.

Twitter v. Taamneh

On January 1, 2017, a terrorist attack for which ISIS claimed responsibility killed 39 people and injured dozens more at a nightclub in Istanbul, Turkey. The relatives of one victim commenced this action against Twitter, Facebook, and Google (as owner of YouTube) under the Justice Against Sponsors of Terrorism Act (JASTA), which amended the ATA to create civil liability for aiding and abetting certain terrorist acts.

The complaint alleged that ISIS’s use of defendants’ platforms was well-known and well-publicized. Moreover, defendants had occasion to review particular ISIS content. For example, Google had to “review and approve” certain content before it permitted revenue-sharing for advertisements. By allowing such content to remain on their platforms and recommending that content to target audiences, plaintiffs alleged, the platforms knowingly supported the global proliferation of ISIS, which ultimately resulted in the Istanbul attack.

The immediate question before the Court is the extent to which internet platforms may be held liable under the ATA for aiding and abetting terrorist organizations such as ISIS. Section 2333(d)(2) of the ATA provides a civil cause of action against a defendant who “aids and abets … the person who committed … an act of international terrorism.” Aiding and abetting, under the statute, involves “knowingly providing substantial assistance.” Unfortunately, the statutory text is ambiguous. Petitioners argue that liability applies only when a defendant provides substantial assistance to a particular “act of international terrorism.” Respondents counter that liability attaches when a defendant provides substantial assistance to the “person” (or organization or enterprise) who “committed” such an act of terrorism. In other words, the central dispute involves the object of assistance. Are plaintiffs required to prove that the defendant assisted a particular terrorist attack, or does it suffice to show that the defendant aided the organization responsible for a terrorist attack? Under a plain-meaning approach, either reading is plausible.

Resolving this issue would also address the requisite knowledge element, a point raised forcefully by petitioners. Must plaintiffs allege that the defendants knew they were assisting a specific act of terrorism? Or does it suffice to allege that defendants knew they were disseminating ISIS content? The Taamneh plaintiffs allege that defendants knew ISIS used their platforms and failed to take sufficient steps to prevent such use. In their view, if plaintiffs had to allege that defendants had direct knowledge of (and inaction with respect to) the specific attack, it would be almost impossible for any plaintiff to bring a claim under Section 2333 that could survive the pleading stage. That would contradict Congress’s intent for JASTA to provide the “broadest possible basis” of relief for civil litigants (130 Stat. at 852, § 2(b)).

Congress specifically provided in JASTA that the D.C. Circuit’s decision in Halberstam v. Welch provides the “proper legal framework” for analyzing aiding-and-abetting liability. Halberstam sets forth three elements for a civil aiding-and-abetting cause of action: (1) there must be a wrongful act committed by the person assisted by the defendant, (2) at the time of assistance, the defendant must be “generally aware of his role as part of an overall illegal or tortious activity,” and (3) the defendant must have knowingly assisted the “principal violation.” Arguably, this standard could support either petitioners or respondents in this case. The petitioners insist that courts should construe “principal violation” in the third element to require assistance to a specific act of terrorism. Thus, if the defendants were unaware that particular social media users were involved in planning a specific terrorist attack, then they cannot be held liable for failing to block or remove the offending accounts.

In contrast, respondents argue that Halberstam did not create such a narrow standard. Halberstam arose from a series of burglaries that the principal actor committed without the knowledge of his live-in partner. One of the burglaries resulted in the murder of Dr. Halberstam in his home, and Halberstam’s relatives brought an aiding-and-abetting action against the partner. The D.C. Circuit held that, although she did not know about the burglaries in advance, the district court properly inferred a “tacit accord” with the criminal enterprise from her unquestioning, years-long participation in selling the stolen antiques and enjoying the proceeds. Having assisted the principal violation, she was held liable for the murder as a reasonably foreseeable consequence of the burglaries. Respondents thus argue that Halberstam supports civil liability even where the secondary actor did not know the specifics of the crime she assisted. Moreover, Halberstam endorsed aiding-and-abetting liability for the reasonably foreseeable consequences of the principal’s actions. Just as the partner could be held liable for the unplanned murder of Dr. Halberstam, respondents argue, social media companies can be held liable for terrorist attacks resulting from the exploitation of their platforms by terrorist organizations, because the companies recommended terrorist content to their users.

Thus, petitioners and respondents in Taamneh present very different interpretations of Halberstam. Respondents argue that Halberstam merely requires proof that the companies knowingly aided the growth of ISIS by recommending ISIS content to particular users. Petitioners argue that Halberstam supports their view that Section 2333 requires proof that the companies knowingly aided the specific ISIS attack in Istanbul that forms the basis of the complaint in Taamneh.


For the plaintiffs in either case to win on the merits, the Supreme Court must rule in their favor on both the ATA issue (Taamneh) and the Section 230 issue (Gonzalez). The lower courts did not reach the Section 230 issue in Taamneh. However, if the Court resolves the Section 230 issue in favor of internet companies in Gonzalez, it could avoid the ATA issue in Taamneh because the Taamneh claims would also be barred by Section 230. Conversely, if the Court rules in favor of the companies in Taamneh, it could sidestep the Section 230 issue in Gonzalez on the grounds that the Gonzalez plaintiffs failed to state a claim. We refrain from making any prediction about the likely outcome until we hear the oral arguments in both cases.