Online hate speech

Online hate speech is speech conducted online that attacks a person or a group on the basis of race, religion, ethnic origin, sexual orientation, disability, or gender.[1]

A Facebook user whose account has been suspended for violating hate speech guidelines. "Facebook Hates Google+ ???" by Frederick Md Publicity is licensed under CC BY 2.0.

Online hate speech expresses conflicts between different groups within and across societies. It is a vivid example of how the Internet brings both opportunities and challenges for freedom of expression and speech while also defending human dignity.[2]

Multilateral treaties such as the International Covenant on Civil and Political Rights (ICCPR) have sought to define its contours. Multi-stakeholder processes (e.g. the Rabat Plan of Action) have tried to bring greater clarity and have suggested mechanisms for identifying hateful messages. Yet hate speech remains a generic term in everyday discourse, mixing concrete threats to individuals and/or groups with cases in which people may simply be venting their anger against authority. Internet intermediaries—organizations that mediate online communication, such as Facebook, Twitter, and Google—have advanced their own definitions of hate speech that bind users to a set of rules and allow companies to limit certain forms of expression. National and regional bodies have sought to promote understandings of the term that are more rooted in local traditions.[2]

The Internet's speed and reach make it difficult for governments to enforce national legislation in the virtual world. Social media platforms are private spaces for public expression, which complicates the task of regulators. Some of the companies owning these spaces have become more responsive to tackling the problem of online hate speech.[2]

Politicians, activists, and academics discuss the character of online hate speech and its relation to offline speech and action, but these debates tend to be removed from systematic empirical evidence. The character of perceived hate speech and its possible consequences have led to much emphasis being placed on solutions to the problem and on how those solutions should be grounded in international human rights norms. Yet this very focus has also limited deeper attempts to understand the causes underlying the phenomenon and the dynamics through which certain types of content emerge, diffuse, and lead—or not—to actual discrimination, hostility, or violence.[2]

Online hate speech has been on the rise since the start of 2020, amid COVID-19 tensions, anti-Asian rhetoric, ongoing racial injustice, mass civil unrest, violence, and the 2020 United States presidential election. Yet many instances of hate speech have been defended under the First Amendment, which allows online hate speech to continue.