Facebook’s “Fight” Against Hate Speech
The Rohingya are an ethnic and religious minority in Myanmar: they are Muslims, while the majority of Burmese people are Buddhist. They have faced oppression from the Burmese government since the military seized power in the 1960s, and in the decades since, military campaigns have pushed more than 200,000 Rohingya out of Myanmar and into Bangladesh. Although the Rohingya have lived in Rakhine State, on the west coast of Myanmar, for just as long as their Buddhist neighbors, they have long been marginalized and discriminated against by the majority. Now they are being forced out of their homes by the military’s violent actions, causing a massive refugee crisis. In 2018, the United Nations released a damning report, saying that “the Myanmar military should be investigated and prosecuted in an international criminal tribunal for genocide, crimes against humanity and war crimes.”
A common slur against Rohingya people is “Khoe Win Bengali”, which translates to “Bengali that sneaked in”. Rohingya people are often referred to as Bengalis in social media posts, fueling the idea that they are illegal immigrants who must be purged. Hate speech against the Rohingya is a massive issue, especially on the country’s most popular social media platform: Facebook. Essentially everyone in Myanmar has Facebook, and for many people the word “Facebook” is synonymous with “the Internet”. Buddhist nationalist organizations and the Burmese military have conducted extensive propaganda campaigns against the Rohingya on Facebook. Most people get their news from their Facebook feeds, so this vitriol has had a magnified impact.
At first, Facebook paid little attention to the Rohingya crisis. In the face of mounting international criticism, however, they eventually decided to take action. In April 2018, Mark Zuckerberg testified before the US Senate, promising that Facebook would take a more active role in policing anti-Rohingya hate speech by hiring more Burmese speakers to monitor content. In August of that same year, a Reuters Special Report found more than 1,000 examples of hate speech against Rohingya Muslims on Facebook. Clearly, Facebook’s actions have not solved the problem.
In fact, there is evidence that Facebook’s policies have actually had the opposite effect. Mohammed Anwar, a Rohingya activist living in exile in Malaysia, told VICE News that Facebook’s censorship policies were preventing him from posting about crimes committed against the Rohingya by the military. Reportedly, dozens of activists have the same complaint: their accounts are suspended, and their posts are deleted or censored. If posts from activists are being censored while numerous vitriolic anti-Rohingya posts are “slipping through the cracks” of Facebook’s monitoring system, what does that say about Facebook’s method for dealing with this issue?
Some might argue that Facebook simply needs to employ more Burmese content reviewers and do a better job of teaching them which posts should be removed and which should not. However, banning all hate speech might not be the most effective strategy. As we have seen with mass shooters in the United States, who publish their manifestos online for the world to see, the censorship policies of individual companies will not stop hate speech. Users will simply find other platforms where they can spew their hatred. Suppression of hate speech by companies or governments can make it harder to find, but it has never made it disappear.
Social media companies do have a moral responsibility to prevent violence, but fulfilling it does not mean attempting to remove all hate speech from their platforms. Removing every post deemed “hate speech” does more harm than good. It is extremely difficult, and perhaps impossible, to craft a blanket, objective policy that removes all hate speech without also suppressing counterspeech. If individual content reviewers are instead given the discretion to make subjective judgments about the “appropriateness” of content, those few people gain far too much power over public discourse. Moreover, it is unrealistic to expect social media companies to hire such a large number of content reviewers and give all of them comprehensive training in the many forms hate speech takes. Given the deep and varied history behind different types of hate speech around the world, this approach would be both impractical and expensive.

Perhaps Facebook should adopt a different strategy. YouTube, for example, does not approach hate speech and misinformation by banning it outright: when users search for misinformation, the algorithm also surfaces posts and videos that debunk those claims. While no data on the effectiveness of this method is readily available, it is an idea worth considering.
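To make the idea concrete, here is a minimal sketch of such a counterspeech-first search. Everything in it is hypothetical: the flagged-topic list, the toy relevance ranking, and the function names are invented for illustration and do not describe YouTube’s or Facebook’s actual systems.

```python
# Hypothetical sketch: surface counterspeech alongside flagged content
# instead of deleting it. Topics, posts, and ranking are all invented.

FLAGGED_TOPICS = {
    # invented mapping from a flagged topic to curated debunking posts
    "rohingya illegal immigration": [
        "Fact check: the Rohingya have lived in Rakhine State for generations",
    ],
}

def rank_results(query, posts):
    """Toy relevance ranking: posts sharing more words with the query rank first."""
    terms = set(query.lower().split())
    return sorted(posts, key=lambda post: -len(terms & set(post.lower().split())))

def search_with_counterspeech(query, posts):
    """Rank results normally, but prepend debunking posts for flagged topics
    rather than removing the matching content."""
    results = rank_results(query, posts)
    query_terms = set(query.lower().split())
    for topic, debunks in FLAGGED_TOPICS.items():
        if set(topic.split()) & query_terms:  # query touches a flagged topic
            results = debunks + results       # counterspeech appears first
    return results

if __name__ == "__main__":
    feed = [
        "Bengalis sneaked into Rakhine and must leave",
        "Weather forecast for Yangon this weekend",
    ]
    print(search_with_counterspeech("rohingya immigration", feed))
```

The point of the sketch is the design choice, not the code: flagged content stays searchable, but authoritative rebuttals are ranked above it.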
Under the First Amendment, the government cannot regulate the content hosted by private companies such as Facebook. Faced with international criticism, Facebook has been forced to deal with the crisis in Myanmar, but they have clearly failed miserably at their goal. It is time for them to consider an alternative approach.
Photo provided by Flickr, courtesy of DFID – UK Department for International Development under the Creative Commons License