The Internet’s Future May Be in the Supreme Court’s Hands
December 19, 2022 | The U.S. Supreme Court has agreed to hear two cases – and is considering accepting more – that may expose social media companies to significant financial liabilities and state regulation. All concern the manner in which public discourse occurs online and whether social media companies should have responsibility for, or control over, the content that appears on their platforms.
The issue in the first case, Gonzalez v. Google LLC, No. 21-1333, is whether Section 230(c)(1) of the Communications Decency Act, which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or whether it only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.
In the second case, Twitter, Inc. v. Taamneh, No. 21-1496, the Court has been asked to decide (1) whether a social media company that provides generic, widely available services to its users, and that regularly works to detect and prevent terrorists from using those services, knowingly provided substantial assistance under 18 U.S.C. § 2333, the Antiterrorism Act of 1990 (ATA), if it allegedly could have taken more meaningful or aggressive action to prevent such use, and (2) whether a social media company whose generic, widely available services were not used in connection with the specific act of international terrorism that injured the plaintiff may be liable for aiding and abetting under Section 2333.
The petition in Moody v. NetChoice, LLC, No. 22-277, and the petition in the related case, NetChoice, LLC v. Moody, No. 22-393, ask the Court to consider whether the First Amendment bars Florida legislation that seeks to require social media companies to take certain actions and to impose certain restrictions on how they treat users’ content. The Court will consider both petitions at its January 6, 2023, conference.
As should be clear, if the Court were to rule against social media companies in even one of these cases, the ramifications would be significant for the companies, their advertisers, and their users.
The Google Case
The plaintiffs in the Google case are relatives of Nohemi Gonzalez, an American citizen who was murdered in a November 2015 terrorist attack in Paris, France, for which the Islamic State of Iraq and Syria (ISIS) claimed responsibility. The plaintiffs sued Google LLC under Section 2333(a) of the ATA, which authorizes American nationals injured “by reason of an act of international terrorism” to bring a civil action for treble damages in federal court. A separate provision, Section 2333(d)(2), imposes secondary civil liability on “any person who aids and abets, by knowingly providing substantial assistance” to “an act of international terrorism.”
The plaintiffs alleged, among other things, that Google was liable under the ATA for providing resources and assistance to ISIS through Google’s ownership of the YouTube video-sharing platform. According to the plaintiffs, ISIS and its adherents used YouTube “to disseminate its videos and messages and execute its propaganda, recruitment, and operational campaigns.” They also alleged that, notwithstanding YouTube’s policies prohibiting terrorist content, “[p]rior to the Paris attacks, [YouTube] refused to actively monitor” the site “to block ISIS’s use of” the platform.
In addition, the plaintiffs alleged that YouTube supplies its users with videos that other users have posted. They alleged that a user can “subscribe” to another user’s “channel,” and that YouTube will “distribute” new videos on that channel to the channel’s subscribers. Moreover, they contended that YouTube implements “computer algorithms” to “suggest” to particular users “videos and accounts” that are “similar” to those the user has previously watched and that play automatically when another video ends. According to the plaintiffs, by using the algorithms and related features to “recommend[] ISIS videos,” YouTube “assists ISIS in spreading its message.”
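The complaint does not describe how these systems work internally, and the platforms’ actual algorithms are proprietary. Purely as a hypothetical illustration of the general technique at issue (ranking candidate videos by their overlap with a user’s viewing history), a simplified recommender of this general kind might look like the following sketch; all names and data here are invented, and real systems are vastly more complex:

```python
# Hypothetical, simplified sketch of a viewing-history-based recommender.
# This is illustrative only; it is not YouTube's actual algorithm.
from collections import Counter

def recommend(user_history, all_videos, top_n=3):
    """Rank candidate videos by how many tags they share with
    videos the user has already watched."""
    # Count how often each tag appears in the user's watch history.
    watched_tags = Counter()
    for video in user_history:
        watched_tags.update(video["tags"])

    # Score a candidate by the weight of its tags in the history.
    def similarity(video):
        return sum(watched_tags[tag] for tag in video["tags"])

    candidates = [v for v in all_videos if v not in user_history]
    return sorted(candidates, key=similarity, reverse=True)[:top_n]

history = [{"id": "a", "tags": ["cooking", "travel"]}]
catalog = [
    {"id": "b", "tags": ["cooking"]},
    {"id": "c", "tags": ["sports"]},
]
print([v["id"] for v in recommend(history, catalog)])  # ['b', 'c']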
The U.S. District Court for the Northern District of California dismissed the plaintiffs’ complaint, primarily holding that Section 230(c)(1) barred most of the plaintiffs’ claims.
The U.S. Court of Appeals for the Ninth Circuit affirmed, finding that YouTube provides an “interactive computer service” and is eligible for Section 230 protection. The Ninth Circuit declared that most of the plaintiffs’ claims sought “to treat YouTube as a publisher or speaker” of ISIS content within the meaning of Section 230(c)(1) but that YouTube had not acted as an “information content provider” with respect to ISIS videos. The circuit court was not persuaded by the plaintiffs’ contentions that YouTube “develop[s] the ISIS content that appears on YouTube, at least in part,” by recommending ISIS content to other users through its algorithms. Rather, the Ninth Circuit concluded, because YouTube recommends content “based upon users’ viewing history and what is known about the users,” its recommendations reflect the same “core principle” as “a traditional search engine.”
Whether the Supreme Court will agree with the circuit court’s conclusions remains to be seen. If it does not, then social media companies that recommend content – such as games or viral challenges on TikTok or the kind of information alleged in the complaint against Google – may face tremendous liabilities.
The Twitter Case
The plaintiffs in the Twitter case are U.S.-based family members of Nawras Alassaf, a Jordanian citizen killed in January 2017 when Abdulkadir Masharipov fired 120 rounds into a crowd at the Reina nightclub in Istanbul, Turkey. ISIS claimed responsibility for the attack.
The plaintiffs sued three social media companies – Twitter, Facebook, and Google – claiming, among other things, that they were liable under the ATA for aiding and abetting the Reina attack because they were “a critical part of ISIS’s growth.” The plaintiffs alleged that ISIS-affiliated accounts first appeared on Twitter in 2010, and that ISIS has used Facebook and YouTube since at least 2012 and 2013, respectively. The plaintiffs further alleged that ISIS and its affiliated media production and distribution networks have “openly maintained and used official Twitter, Facebook, and YouTube accounts” to recruit, radicalize, and instruct terrorists, fund terrorism, and spread propaganda.
In addition to allegedly hosting ISIS-affiliated accounts and maintaining features by which users may “follow,” “friend,” or “subscribe” to those accounts, the defendants allegedly use “computer algorithms” to “suggest” “content, videos, and accounts” to users based on their information and online activities. According to the plaintiffs, these features permit ISIS to use “Twitter, Facebook, and YouTube as tools to connect with others and promote its terrorist activities.”
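As with the Google complaint, the pleadings do not detail the platforms’ actual suggestion systems. As a simplified, hypothetical sketch of one common approach of this general kind (surfacing accounts that are followed by the accounts a user already follows), consider the following; again, the names and data are invented:

```python
# Hypothetical sketch of an "accounts you may know" suggestion based on
# the follow graph; real platform systems are proprietary and far more complex.
from collections import Counter

def suggest_accounts(user, follow_graph, top_n=3):
    """Suggest accounts followed by many of the accounts the user
    already follows (a "friends of friends" heuristic)."""
    scores = Counter()
    for followed in follow_graph.get(user, set()):
        for candidate in follow_graph.get(followed, set()):
            # Skip the user and accounts they already follow.
            if candidate != user and candidate not in follow_graph[user]:
                scores[candidate] += 1
    return [account for account, _ in scores.most_common(top_n)]

graph = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "erin"},
}
print(suggest_accounts("alice", graph))  # ['dave', 'erin']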
The district court dismissed the complaint, and the plaintiffs appealed the dismissal of the aiding-and-abetting claim they had asserted under the ATA.
The Ninth Circuit reversed the district court. It explained that the plaintiffs’ aiding-and-abetting claim was governed by Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983), which identified six factors relevant to whether a defendant has furnished “substantial assistance” for purposes of an aiding-and-abetting claim: (1) the nature of the act encouraged; (2) the amount of assistance given by the defendant; (3) the defendant’s presence or absence at the time of the tort; (4) the defendant’s relation to the principal; (5) the defendant’s state of mind; and (6) the period of the defendant’s assistance. After considering these factors and finding, among other things, that the plaintiffs alleged that the defendants provided services that were central to ISIS’s growth and expansion and that this assistance was provided over many years, the Ninth Circuit held that the plaintiffs adequately stated a claim for aiding-and-abetting liability under the ATA.
The United States, among others, has filed an amicus brief asking the Supreme Court to reverse the Ninth Circuit in this case. The government argues that the plaintiffs have not plausibly alleged that the defendants “aid[ed] and abet[ted], by knowingly providing substantial assistance” to the Reina attack. According to the government, the knowing-and-substantial assistance requirement is “less likely to be satisfied” where a defendant provided only routine business services in an ordinary manner, was remote from the unlawful act that injured the plaintiff, or was accused of aiding and abetting another’s conduct through inaction.
If the Supreme Court rejects that argument and upholds the Ninth Circuit’s decision, the complaint may still have to pass muster under Section 230, but affirming the circuit court would nonetheless be a groundbreaking ruling under the ATA.
The Florida Law
When Florida Governor Ron DeSantis signed S.B. 7072 into law in May 2021, he indicated that he intended “to Stop the Censorship of Floridians by Big Tech,” as I pointed out in this space several months ago. See Shari Claire Lewis, Circuits Split Over States’ Right To Regulate Social Media Platforms, NYLJ (Aug. 15, 2022). Now, the Florida law is at issue in Moody v. NetChoice, LLC and NetChoice, LLC v. Moody.
The principal provisions of the Florida law can be divided into three categories: (1) content moderation restrictions (including that social media platforms may not “deplatform a candidate for office”); (2) disclosure obligations (including that social media platforms must provide a “thorough rationale” for each content moderation decision they make); and (3) a user data requirement (providing that social media platforms must allow a deplatformed user to “access or retrieve all of the user’s information, content, material, and data for at least 60 days” after the user receives notice of deplatforming).
After industry members challenged the law, the U.S. District Court for the Northern District of Florida issued a preliminary injunction in favor of the plaintiffs, holding that S.B. 7072’s provisions implicate the First Amendment because they restrict social media platforms’ constitutionally protected exercise of “editorial judgment.”
The U.S. Court of Appeals for the Eleventh Circuit substantially affirmed the district court’s issuance of the preliminary injunction, finding it substantially likely that S.B. 7072’s content moderation restrictions and its requirement that platforms provide a thorough rationale for every content moderation action violated the First Amendment.
Now, the Supreme Court has the opportunity to decide whether the First Amendment bars the Florida law. If the Court rules in favor of Florida, one can expect that more states will enact legislation affecting the operations of social media companies. Such a ruling, together with the Court’s upcoming decisions in the Google and Twitter cases, may significantly reshape social media. Stay tuned!
Reprinted with permission from the December 19, 2022, issue of the New York Law Journal©, ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.