Social media platforms can act as a highly effective medium for activism. Familiar examples include the #MeToo movement and the Ice Bucket Challenge. More recently, the protests against police brutality and racial injustice following George Floyd’s death last year were also fueled by public support of the Black Lives Matter movement on social media around the world. Even back in 2010, during the Arab Spring, the internet and social media were used to protest against autocratic regimes. Social media has played a crucial role in broadcasting political and social messages to a global audience in a matter of seconds, acting as a catalyst for activism and sociopolitical movements. However, social media has also become a medium for pervasive hate speech, which has dire consequences for our daily lives, society, and even politics.

Hate speech can be fatal. The controversy over Everytime is a stark example of how online hate speech can inflict psychological trauma and physical harm, and even end a life. Everytime is an online community platform for university students in Korea, used by over 400,000 students across 396 campuses. Last year, a student uploaded several posts about her depression and mental state but received extremely hateful comments, such as “die quietly” or “you only say you want to die, but you don’t.” She later took her own life, leaving behind a note calling for the punishment of the users who wrote such brutal comments, as reported by JoongAng Ilbo. The incident shows how the influence of online hate speech is underestimated, and how few regulations and guidelines exist to prevent it in the first place. Though people recognize the ubiquity of the internet and its widespread influence, most tend to trivialize aggressive exchanges in comment sections or posts containing abusive language. We still fail to fully grasp that hate speech on social media is just as violent and harmful as physical threats or insults in face-to-face interactions.

Another major concern is that hate speech tends to target minorities, reinforcing stereotypes about socially vulnerable groups. A few years ago, Korean slang terms, mostly puns, associating certain groups of people with the suffix “-choong” (meaning “bug” in Korean) became widely used on the internet. The main targets of these “jokes” were gay people, children, teenagers, mothers, the elderly, and Southeast Asians. While some criticized the unjust meaning behind these puns, their active users claimed they had no bad intentions and dismissed the criticism as overly sensitive. This disparity stems from social inequality and from the license people feel to disparage social minorities for the sake of humor. The use of these puns establishes an implicit consensus that such derogatory remarks can be lightheartedly dismissed, and as a result, stigmas and prejudiced attitudes towards the targeted groups are reinforced. As this kind of slang becomes mainstream and spreads rapidly across the internet, discrimination is further amplified. An even bigger issue is that teenagers, who make up a major share of social media users, are exposed to hate speech on a regular basis and will unconsciously internalize generalizations and distorted views of minority groups.

How can hate speech be prevented? The first step is to make the public realize the severity of online hate speech and to spread awareness that criminalizing hate speech does not violate the right to freedom of speech. Online ethics education, especially for teenagers at school, seems more necessary than ever. Revisiting the Everytime incident, the app’s policy of guaranteeing anonymity to all users was heavily criticized and held partly responsible for the suicide. Beyond the problem of anonymity, however, users lacked awareness of what their comments could lead to. Platforms must take ethical responsibility, deprioritizing commercial profit and paying more attention to the harm their services can cause. They must actively strengthen regulations and guidelines that penalize the production and dissemination of content promoting hatred or violence, which will make users more cautious when posting, chatting, or writing comments.

COVID-19 has increased social media consumption and greatly extended the time we spend online, multiplying online interactions between individuals. Our daily activities, from working out to taking part in sociopolitical protests, have shifted to digital platforms. It is a timely opportunity to reflect on how we can provide a hate-free and prejudice-free online space for everyone.
