Facebook Finds Refuge in Section 230, at Least for Now

August 20, 2019 | Privacy, Data & Cyber Law

Section 230(c)(1) of the Communications Decency Act of 1996 (CDA) was enacted to facilitate the growth of the internet by immunizing internet service providers (ISPs) from liability in connection with third-party content that appeared on their “interactive computer service.” It states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The benefit of immunizing ISPs from liability for third-party content that violates widely shared social values or laws has been increasingly questioned – especially when that content promotes terrorism, sexual exploitation, or hate speech.

Recently, the U.S. Court of Appeals for the Second Circuit ruled that Section 230(c)(1) shielded Facebook, Inc., from civil liability on claims brought under federal law by U.S. citizens, and their representatives, who were victims of certain terrorist attacks committed by Hamas in Israel. The plaintiffs contended that Facebook unlawfully provided Hamas, a U.S.-designated foreign terrorist organization, with a communications platform that enabled those attacks, particularly through Facebook’s use of its algorithms to serve content to specific users.

The circuit court’s decision, in Force v. Facebook, Inc., No. 18-397 (2d Cir. July 31, 2019), amounted to a big victory for Facebook and, implicitly, for other social media platforms. It is worth noting, however, that the Second Circuit’s opinion was not unanimous. Moreover, there may be bipartisan political will developing in Washington to amend Section 230(c)(1) to weaken the immunity it affords online publishers such as Facebook, which could limit the effect of Force and similar decisions that have been issued by courts across the country.

The Complaint

The plaintiffs’ complaint described terrorist attacks by Hamas against five Americans in Israel between 2014 and 2016. The plaintiffs alleged that Hamas used Facebook to post content that encouraged terrorist attacks in Israel during this period, which the attackers allegedly saw. The plaintiffs alleged that Hamas also used Facebook to celebrate these attacks and others, to transmit political messages, and to generally support further violence against Israel. The plaintiffs asserted that the perpetrators were able to view this content because, although Facebook’s terms and policies barred use by Hamas and other designated foreign terrorist organizations, Facebook failed to remove the “openly maintained” pages and associated content of certain Hamas leaders, spokesmen, and other members.

The plaintiffs also alleged that Facebook’s algorithms directed such content to the personalized newsfeeds of the individuals who harmed the plaintiffs.

Thus, the plaintiffs claimed, Facebook enabled Hamas “to disseminate its messages directly to its intended audiences” and to “carry out the essential communication components of [its] terror attacks.”

The plaintiffs contended that, under federal law including the Anti-Terrorism Act’s civil remedies provision, 18 U.S.C. § 2333, Facebook was civilly liable for aiding and abetting Hamas’ acts of international terrorism; conspiring with Hamas in furtherance of acts of international terrorism; providing material support to terrorists; and providing material support to a designated foreign terrorist organization.

The U.S. District Court for the Eastern District of New York held that Section 230(c)(1) foreclosed the plaintiffs’ claims because they impermissibly involved “treat[ing]” Facebook “as the publisher or speaker of any information provided by” Hamas. The plaintiffs appealed to the Second Circuit, arguing that the district court improperly held that Section 230(c)(1) barred their claims. They contended that their claims did not treat Facebook as the “publisher” or “speaker” of content provided by Hamas, as Section 230(c)(1) requires for immunity, and that Facebook itself contributed to the development of that content through its algorithms.

The Second Circuit’s Decision

The Second Circuit, in a decision by Circuit Judge Christopher F. Droney, concluded that the district court had properly applied Section 230(c)(1) to the plaintiffs’ federal claims.

The circuit court explained that, subject to certain delineated exceptions, Section 230(c)(1) shields a defendant from civil liability when: (1) it is a “provider or user of an interactive computer service”; (2) the plaintiff’s claims “treat[]” the defendant as the “publisher or speaker” of information; and (3) that information is “provided by” an “information content provider” other than the defendant interactive computer service.

The Second Circuit pointed out that the parties agreed that Facebook met the first part of the test as a provider of an “interactive computer service.” It then considered the other two parts.

First, it ruled that the plaintiffs’ claims implicated Facebook as a “publisher” of information. The court noted that the plaintiffs sought to hold Facebook liable for “giving Hamas a forum with which to communicate and for actively bringing Hamas’ message to interested parties,” and it ruled that that alleged conduct by Facebook fell “within the heartland” of what it meant to be the “publisher” of information under Section 230(c)(1). So, too, the court continued, did Facebook’s alleged failure to delete content from Hamas members’ Facebook pages.

The Second Circuit rejected the plaintiffs’ argument that Facebook did not act as the publisher of Hamas’ content within the meaning of Section 230(c)(1) because it used algorithms to suggest content to users, resulting in “matchmaking.” As examples, the plaintiffs alleged that Facebook’s “newsfeed” used algorithms that predicted and showed third-party content that was most likely to interest and engage users and that Facebook’s algorithms also provided “friend suggestions” based on analysis of users’ existing social connections on Facebook and other behavioral and demographic data.
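
Although the opinion does not detail how these systems work, the basic “matchmaking” the plaintiffs described can be illustrated with a short, purely hypothetical Python sketch – ranking a newsfeed by overlap with a user’s interests, and suggesting friends by mutual connections. This is not Facebook’s actual code; the Post type, the tags, and the scoring functions below are invented solely for illustration.

    # Hypothetical sketch of interest-based "matchmaking" -- NOT Facebook's
    # actual algorithms. The data model, tags, and scoring are invented.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        tags: set[str]  # topics the post concerns

    def rank_feed(user_interests: set[str], posts: list[Post]) -> list[Post]:
        """Order posts so those sharing the most tags with the user come first."""
        return sorted(posts, key=lambda p: len(p.tags & user_interests), reverse=True)

    def suggest_friends(user: str, friends: dict[str, set[str]]) -> list[str]:
        """Rank friends-of-friends by their number of mutual friends with the user."""
        mine = friends[user]
        candidates = {c for f in mine for c in friends[f]} - mine - {user}
        return sorted(candidates, key=lambda c: len(friends[c] & mine), reverse=True)

    if __name__ == "__main__":
        feed = [Post("a", {"sports"}), Post("b", {"politics", "news"}), Post("c", {"cooking"})]
        print([p.post_id for p in rank_feed({"politics", "news"}, feed)])  # "b" ranks first
        graph = {"alice": {"bob", "carol"}, "bob": {"alice", "dave"},
                 "carol": {"alice", "dave"}, "dave": {"bob", "carol"}}
        print(suggest_friends("alice", graph))  # ["dave"] -- two mutual friends

Even in this toy form, the court’s point is visible: the ranking logic arranges and displays only what third parties supplied; it adds no content of its own.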

The court disagreed with the plaintiffs’ contention that Facebook’s use of algorithms rendered it a non-publisher. It stated that there was “no basis” for concluding that an interactive computer service was not the “publisher” of third-party information when it used tools such as algorithms designed to match that information with a consumer’s interests. The Second Circuit said that “arranging and distributing third-party information” inherently formed “connections” and “matches” among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media – “an essential result of publishing.” It added that accepting the plaintiffs’ argument would “eviscerate” Section 230(c)(1) in that a defendant interactive computer service would be ineligible for Section 230(c)(1) immunity by virtue of simply organizing and displaying content exclusively provided by third parties.

Addressing the final part of the three-part test, the court ruled that Facebook was not itself an “information content provider.” The plaintiffs had contended that Facebook was a creator or developer, at least in part, of the terrorism-related content at issue because its algorithms “developed” Hamas’ content by directing it to the users most interested in Hamas and its terrorist activities, even when those users did not seek out that content. The court rejected that contention, stating that a defendant must have directly and “materially” contributed to what made the content itself “unlawful” to be considered an information content provider. It then ruled that the plaintiffs’ allegations about Facebook’s conduct did not render it responsible for the Hamas-related content, noting that Facebook did not edit (or suggest edits to) the content that its users – including Hamas – published. That practice, the court observed, was consistent with Facebook’s terms of service, which provide that a Facebook user “own[s] all of the content and information [the user] post[s] on Facebook, and [the user] can control how it is shared through [the user’s] privacy and application settings.”

The court also ruled that “[m]erely arranging and displaying others’ content” to users of Facebook through algorithms, even if the content was not actively sought by those users, was “not enough to hold Facebook responsible as the ‘develop[er]’ or ‘creat[or]’ of that content.” That Facebook’s algorithms made content more “visible,” “available,” and “usable” did not trouble the court, which concluded that it was “an essential part of traditional publishing” that did “not amount to ‘developing’ that information within the meaning of Section 230.”

A Dissent

Chief Judge Robert A. Katzmann dissented from the court’s decision with respect to its treatment of Facebook’s friend- and content-suggestion algorithms. In his view, the CDA did not bar claims based on the connections that Facebook’s algorithms made between individuals. As applied to those algorithms, he wrote, the plaintiffs’ claims did not seek to punish Facebook for the content others posted but “would hold Facebook liable for its affirmative role in bringing terrorists together.”

In short, he declared, the plaintiffs’ causes of action, insofar as they rested on Facebook’s algorithms, did “not run afoul of the CDA.”

Chief Judge Katzmann suggested that “Congress may wish to revisit the CDA to better calibrate the circumstances where such immunization is appropriate and inappropriate in light of congressional purposes.” Observing that the failure to remove terrorist content was immunized under Section 230 “as currently written,” he cautioned that “[s]hielding internet companies that bring terrorists together using algorithms could leave dangerous activity unchecked.” He added that, “[w]hether, and to what extent, Congress should allow liability for tech companies that encourage terrorism, propaganda, and extremism is a question for legislators, not judges.”

Congress may be on the verge of tackling that question.

The Future

Washington politicians of both parties seem to be taking note of “hate speech” on social media platforms, and of the broad immunity that Section 230(c)(1) affords those platforms. For example, both Speaker Nancy Pelosi (D-Cal.) and Senator Ted Cruz (R-Tex.) have questioned whether Section 230(c)(1) should be amended or rewritten to address the proliferation of hate speech. Conversely, others stress the importance of Section 230(c)(1) to free speech on the internet without government interference; the Electronic Frontier Foundation, for example, describes Section 230 as “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.”

Social media sites can find much to cheer in the Second Circuit’s decision in Force – at least for now. Whether Congress ultimately will act to modify Section 230 and, in effect, to limit the reach of Force remains to be seen.

Reprinted with permission from the August 19, 2019 issue of the New York Law Journal. © ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.
