Washington, D.C.—Representative David N. Cicilline, Chair, and Representative Ken Buck, Ranking Member, of the House Judiciary Committee’s Subcommittee on Antitrust, Commercial, and Administrative Law today announced the Platform Integrity Act, a bipartisan bill to correct the judicial misinterpretation of a provision of the Communications Decency Act, 47 U.S.C. 230(c)(1), by recognizing that online platforms may be held responsible for the content that they themselves promote.
Section 230(c)(1) protects interactive computer service providers, such as social media companies and other online platforms, from civil liability for third-party content posted on their platforms. Some courts have erroneously expanded that protection to content that online platforms have affirmatively promoted or suggested to their users. This bill corrects that error. It does not create any new cause of action or basis for liability; it simply clarifies that Section 230(c)(1)’s liability exclusion does not extend to content that platforms themselves take an active role in proliferating. The Platform Integrity Act makes clear that platforms have no excuse for amplifying extremism online.
“Section 230(c)(1)’s shield protecting online platforms from liability for third-party content was never intended to give those platforms free rein to spread extremism, hate, and terrorism online. When an online platform is the one that recommends violent and illegal content, it should be held responsible for these actions. This bill removes an unwarranted obstacle preventing those injured by the promotion of extremist content from being able to pursue recovery from the platform,” said Congressman Cicilline.
“Clarifying this language and making it clear that Section 230(c)(1) does not grant immunity to online platforms that choose to promote illegal, violent, or extreme online content is critical to our society. Currently, Big Tech platforms are able to avoid any serious liability or accountability for their actions. This legislation will fix the misinterpretation of Section 230 immunity and ensure that Big Tech can no longer profit off of promoting terrorists’ content on their platforms,” said Congressman Buck.
The Platform Integrity Act would:
- Offer a simple and common-sense clarification of the scope of 47 U.S.C. 230(c)(1) by removing a bar to recovery for victims who have suffered harm from acts of terrorism, hate, or extremism enabled by online platforms’ content suggestions.
- Reject the judicial misinterpretation of 47 U.S.C. 230(c)(1) whereby courts have concluded, for example, that the statute bars victims of terrorist attacks from seeking relief from a social-media company for its proactive role in connecting the perpetrators through friend- and content-suggestion algorithms.
- Adopt the correct interpretation of the statute reflected in the separate opinion of the late Judge Robert A. Katzmann in Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), in which he concluded that “it strains the English language” to construe 47 U.S.C. 230(c)(1) “to say that in targeting and recommending [extremist] writings to users,” “thereby forging connections” and “developing new social networks,” online platforms are protected from liability by the statute.
- Apply only to content that a platform actively promotes, leaving intact Section 230(c)(2)’s protection for platforms’ good-faith enforcement of their terms of service and community guidelines.