On Monday, Facebook removed numerous extremist pages from its platform, including several Canadian ones, such as those of the groups Soldiers of Odin, the Canadian Nationalist Front and Aryan Strikeforce, as well as those of individuals like white supremacists Faith Goldy and Kevin Goudreau.
Some anti-racist activists say it’s a good start, but only the tip of the iceberg. They also assert that occasional purges of hate content will not address the larger issue in the absence of clear, enforceable standards by social media giants like Twitter and Facebook, which also owns Instagram.
Relatedly, the Centre for Israel and Jewish Affairs has launched a campaign, #notonmyfeed, which is intended, according to the accompanying website, “to stop online hate from becoming real-world violence.” (See jewishindependent.ca/cija-fights-online-hate.) CIJA cited social media posts by the murderers in the Pittsburgh synagogue and the Christchurch mosque killings as cause for governments to move on the issue.
“In both cases, the perpetrator used social media to spread their heinous, hateful agenda,” according to the website notonmyfeed.ca. “From white supremacists to ISIS, it is increasingly clear that online hate and radicalization can fuel and foreshadow offline violence.”
The House of Commons justice committee – as if it is not busy enough with the SNC-Lavalin affair – is launching a study on the issue. The intent, according to CIJA, is to develop a national strategy around online hate.
A national strategy confronting hatred, whether online or offline, seems like a positive development if it helps track problematic people and ideas in order to prevent future violence.
The benefits of a crackdown on online hatred are obvious: by making it more difficult for hateful ideas to reach large, mainstream audiences, moves like Facebook’s hobble groups that use social media to recruit individuals into hate movements. Although there are plenty of forums online where they can continue their efforts, hate groups may not have as easy and accessible a reach if policies are put in place to monitor and censor them and their messages.
Of course, some of the extremists are crowing about being banned.
“Our enemies are weak and terrified,” Goldy tweeted (because she is not banned from that platform). “They forget most revolutions were waged before social media!”
True enough. But making Goldy’s job harder is a good thing.
However, while there are potential positive outcomes, we should not be blind to the potential unintended consequences of such a move.
If the murderers of Pittsburgh and Christchurch had given hints on social media of their intent, isn’t the larger issue here that those threats went unchecked and, therefore, the perpetrators were allowed to complete their mission of mass murder without intervention? Do we really want to eliminate forums in which we can track and identify potential terrorists? If we ban them from these platforms, are we forcing them underground into places where we cannot police them?
Presumably, police and intelligence agencies know where to find the online warrens of hatemongers and can monitor those venues almost as easily as they could Facebook or Twitter, while ensuring that members of the public who are innocently surfing the web do not stumble upon violent hate messaging in seemingly innocuous places. Even so, as CIJA points out, the Pittsburgh and Christchurch killers left a trail on social media and still managed to execute their terrible plans, which suggests we are not doing a stellar job on this front even when the warning signs appear on the world’s largest sharing platforms. How much better, then, can we expect things to be when we force them into the darker crevices of the online world?
This issue is confounding in part because the internet is, by its nature, anarchic and largely beyond the control of all but the most authoritarian governments. As a result, governments and even social media behemoths like Facebook can do only so much to control what is shared across the web writ large.
Leaving aside issues of free expression (which differ across jurisdictions in ways that social media do not), there are practical considerations that we hope elected officials, law enforcement and social media corporations themselves consider when addressing online hate.
As governments begin to take the issue seriously and consider interventions in the interest of public safety (including, especially, the safety of the most commonly targeted identifiable groups), we trust that a balance will be struck between eradicating violently hateful messaging, on the one hand, and, on the other, not harming law enforcement’s ability to do its job by pushing these ideas into clandestine sectors where they can neither be monitored nor challenged. Finding that balance should be the key to formulating public policy on this urgent issue.