De-Platforming Works
With Ye’s bans from Instagram and Twitter, the polarizing debate around “de-platforming” (the banning of users from social media platforms) has been reignited. The artist formerly known as Kanye West was suspended from Instagram after captioning photos of comedian Trevor Noah with racial slurs. He then moved to Twitter, where he was banned only two days later for a tweet that read “when I wake up I’m going death con 3 on JEWISH PEOPLE.”
Ye is only the latest in a growing list of prominent figures, including Alex Jones and Andrew Tate, who have been restricted for hateful and extremist messaging. Social media companies are no strangers to problematic content, or to the controversy surrounding its removal.
Social media platforms have proven to be the most valuable tool available to extremist groups across the world: according to the Alliance to Counter Crime Online (ACCO), 90% of all terrorist activity online happens through social networking tools. Further, half of the interviewees in a study by the International Center for the Study of Violent Extremism said “internet-based recruitment and perusal of social media factored into their decision to join ISIS.” And an internal study at Facebook itself found that 64% of all extremist group joins came from the company’s own carefully tuned recommendation tools.
Yet Facebook publicly claims that 99% of extremist content is taken down before it is ever published; the true number, according to the Counter Extremism Project, is 38%. There is clearly a problem with hateful content online. So what is the solution?
One option is censorship, or de-platforming. The question of censorship is not a new one, and, like any other moral debate, it will likely never be settled. But in the meantime, we have data. And the data shows that de-platforming works.
According to a 2021 study out of a Swiss university, the deletion of subreddits hosting far-right content, such as r/The_Donald and r/Incels, led to a decrease in hate speech across the entire website and reduced the activity of nearly all the original users. Another study had similar findings with extremist Facebook groups, and the analytics company Zignal Labs found that misinformation across several major social sites dropped by 73% after Donald Trump was banned from them following January 6. At the same time, “[i]t’s hard to attribute all of that difference to just that one suspension because 70,000 other accounts were also taken out of the system,” says Kate Starbird, a professor at the University of Washington.
Social media companies aren’t the only ones who can de-platform. After Parler was dropped by Amazon Web Services in 2021, it approached six different hosting services and was denied each time. Another alt-tech platform, Gab, was removed from app stores and, after the Tree of Life massacre, lost the support of “cloud infrastructure providers, domain name providers and other Web-related services.” According to the Huffington Post, Gab’s radicalizing content was key to the shooting, and the shooter was a prolific, verified user of the site. These sites have begun using decentralized, open-source, and censorship-resistant technologies to remain online, according to Jeremy Blackburn, assistant professor of computer science at Binghamton University. But even so, they continue to face financial and technical hardship after being blacklisted.
A common criticism of de-platforming is that it creates echo chambers, further radicalizing those who are banned. But we don’t know that for certain; according to the non-profit organization Data & Society, the unintended effects of de-platforming, near-term and long-term, remain unclear. Censorship slows the growth of extremist audiences and, in many cases, even shrinks them. But experts fear that de-platformed figures who migrate to new platforms can trigger an influx of new users there.
De-platforming must be “transparent and democratic,” according to Robert Gehl, associate professor of communication and media studies at Louisiana Tech University. He points to the alternative social media site Mastodon as an example: Mastodon is a federated platform, a network of independently run servers, with a democratized system of muting, blocking, and banning. This federated system has prevented not just hate speech and extremism, but spam, harassment, advertising, and scamming as well. “De-platforming not only works,” he says, “but I believe it needs to be built into the system itself.”
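To make the federated model Gehl describes concrete, here is a minimal sketch in Python of how each server in such a network can enforce its own mute, block, and defederation lists rather than deferring to one central authority. All class and method names here are hypothetical illustrations; Mastodon’s real implementation is far more involved.

```python
# Minimal sketch of federated, per-instance moderation.
# All names are hypothetical illustrations, not Mastodon's actual API.

from dataclasses import dataclass, field

@dataclass
class Instance:
    """One independently operated server in a federated network."""
    domain: str
    # Each instance maintains its OWN moderation lists, set by its
    # admins and community rather than by a single central company.
    blocked_domains: set = field(default_factory=set)  # full defederation
    muted_users: set = field(default_factory=set)      # hidden, not deleted
    banned_users: set = field(default_factory=set)     # banned outright

    def defederate(self, domain: str) -> None:
        """Stop exchanging posts with an entire misbehaving instance."""
        self.blocked_domains.add(domain)

    def should_deliver(self, author: str, origin_domain: str) -> bool:
        """Moderation check applied to every incoming federated post."""
        if origin_domain in self.blocked_domains:
            return False
        if author in self.banned_users or author in self.muted_users:
            return False
        return True

# Usage: each community applies its own policy to the same content.
home = Instance("example.social")
home.defederate("spam-haven.example")

print(home.should_deliver("alice", "friendly.example"))  # True
print(home.should_deliver("bob", "spam-haven.example"))  # False
```

The design choice is the point: because every server applies its own lists, no single company decides who is heard, and servers that tolerate abuse can simply be cut off by everyone else.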
Unlike alternatives such as Mastodon, mainstream social media companies like Meta, which owns Instagram and Facebook, are beholden to shareholders and, as such, are rewarded only for turning a profit. Increased content moderation is expensive and time-consuming, and under Section 230 of the 1996 Communications Decency Act, U.S. platforms are broadly shielded from liability for the content their users post. In practice, that means allowing and even promoting extremist content right up until advertisers begin to pull funding, walking a careful tightrope between user engagement and public outrage.
As Gehl explains, “It’s not just cute cats that draw attention but conspiracy theories, misinformation and the stoking of bigotry.” When it comes to social media, impressions and engagement reign supreme, and corporations will continue to tolerate hate speech, extremism, and violence as long as they’re profitable. The evidence that de-platforming works gives us urgent and important grounds to demand accountability.
Image courtesy of Progetto Prometeo under a Creative Commons license.