Terror kills, and inciting words can kill; but what about online platforms? In recent years, social networks have turned into a new arena for incitement. Terror organizations operate active accounts on social networks, using online platforms to incite, recruit, and plan terror attacks. These activities pose a serious threat to public safety and security. Online intermediaries, such as Facebook, Twitter, and YouTube, provide platforms that make it easier for terrorists to meet and proliferate in ways previously unimagined. Thus, terrorists are able to cluster, exchange ideas, and promote extremism and polarization. In such an environment, do platforms that host inciting content bear any liability? What about intermediaries operating internet platforms that direct extremist and unlawful content at susceptible users, who, in turn, engage in terrorist activities? Should intermediaries bear civil liability for algorithm-based recommendations on content, connections, and advertisements? Should algorithmic targeting enjoy the same protections as traditional speech? This Article analyzes intermediaries' civil liability for terror attacks under the anti-terror statutes and other doctrines in tort law. It aims to contribute to the literature in several ways. First, it outlines the ways intermediaries aid terrorist activities, whether willingly or unwittingly. Identifying the role online intermediaries play in terrorist activities is the first step toward creating a legal policy that would mitigate the harm caused by terrorists' incitement over the internet. Second, this Article outlines a minimum standard of civil liability that should be imposed on intermediaries for speech made by terrorists on their platforms.
Third, it highlights the contradictions between intermediaries' policies regarding harmful content and the technologies that create personalized experiences for users, which can sometimes recommend unlawful content and connections. This Article proposes imposing a duty on intermediaries that would incentivize them to avoid creating unreasonable risks through personalized algorithmic targeting of unlawful messages. This goal can be achieved by implementing effective measures at the design stage of a platform's algorithmic code. Subsequently, this Article proposes remedies and sanctions under tort, criminal, and civil law, while balancing freedom of speech, efficiency, and the promotion of innovation. The Article concludes with a discussion of complementary approaches that intermediaries may voluntarily adopt to mitigate the harm caused by terrorists.
Lavi, Michal, Do Platforms Kill?, 43 Harvard Journal of Law and Public Policy, no. 2 (2020).