In India, religious texts, social customs, rituals, and everyday cultural practices legitimise the use of hate speech against marginalised caste groups. Notions of the "purity" of "upper-caste" groups and, conversely, the "pollution" of "lower-caste" groups have made the latter subject to discrimination, violence and dehumanisation. These dynamics invariably manifest online, with social media platforms becoming sites of caste discrimination and humiliation.
This report explores two research questions. First, what are the specific contours of caste-hate speech and abuse online? Semi-structured interviews with 12 scholars and activists belonging to Dalit, Bahujan and Adivasi (DBA) groups show that marginalised groups regularly face hate and harassment based on their caste. In addition to the overt hate, DBA individuals and groups are often targeted with abuse for availing reservations – a constitutionally mandated right. More covert forms of hate and abuse are also prevalent: trolls mix caste names and words from different languages together so that their comments appear meaningless to individuals who are not keenly aware of the local context.
Such hateful expression often emerges as a reaction from "upper-caste" groups to DBA resistance and social justice movements. Our respondents reported that this hateful expression can sometimes silence caste-marginalised groups and individuals, exclude them from conversations, and adversely affect their physical and mental well-being.
The second question we explore is how popular social media platforms and online spaces moderate caste-hate speech and abuse. We analysed the community guidelines, policies and transparency reports of Facebook, Twitter, YouTube and Clubhouse. We find that Facebook, Twitter and YouTube incorporated "caste" as a protected characteristic in their hate speech and harassment policies only in the last two or three years – many years after they entered Indian and South Asian markets – showing a disregard for the regional contexts of their users. Even after these policy changes, many platforms – whose forms for reporting harmful content list gender and race – still do not list caste.
Social media companies should radically increase their investment in, and capacity for, understanding regional contexts and languages, with a particular focus on the dynamics of casteist hate and abuse. They will need to collaborate with a diverse set of DBA activists to ensure that their community guidelines effectively tackle overt, covert and hyper-local forms of caste-hate speech and abuse, and that their implementation and reporting processes match these policy commitments.
This research was developed by the Centre for Internet and Society with support from the project "Challenging hate narratives and violations of freedom of religion and expression online in Asia", implemented by the Association for Progressive Communications (APC) with funds from the European Instrument for Democracy and Human Rights (EIDHR).