Here’s how to kick nazis off your Twitter right now
While you wait for Twitter to roll out “more aggressive” rules regarding hate speech, which CEO Jack Dorsey promised are coming within “weeks” as of late Friday, here’s a quick workaround to kick nazis off your Twitter feed right now: go to the ‘Settings and privacy’ page and, under the ‘Content’ section, set the country to Germany (or France).
This switches on Twitter’s per-country nazi-blocking filter, which the company built all the way back in 2012 to comply with specific European hate speech laws that prohibit pro-Nazi content because, y’know, World War II.
Switching the country in your Twitter settings doesn’t change the language, just the legal jurisdiction. So, basically, you get the same Twitter experience, just without so many of the swastika-wielding nazis.
In Germany incitement to hatred is deemed a criminal offense against public order, and nazi imagery, expressions of antisemitism, Holocaust denial and so on are effectively banned in the country.
Free speech is protected in the German constitution but the line is drawn at outlawed speech — which, as programmer and blogger Kevin Marks has noted, is actually a result of the post-war political settlement applied by the triumphant Allied forces — led by, er, the U.S…
In a further irony, Twitter’s nazi-blocking filter gained viral attention on Twitter last week when a Twitter user creatively couched it as “One Weird Trick to get nazi imagery off Twitter”. At the time of writing her tweet has racked up 16,000 likes and 6,600 retweets:
Dorsey’s pledge of effective action against hate tweets followed yet another storm of criticism about how Twitter continues to enable harassment and abuse via its platform. That criticism in turn led to a spontaneous 24-hour boycott on Friday, shortly before Dorsey tweet stormed to say the company would be rolling out new rules around “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies violence”.
(i.e. the stuff women and other victims of online harassment have been telling Twitter to do for years and years.)
Yet in 2012, when Twitter announced the rollout of per-country content blocking, it was sticking firmly to its free speech guns, insisting that the “tweets still must flow” — i.e. even nazi hate speech tweets would keep flowing in all the other markets where this kind of hateful content is not literally illegal.
Indeed, Twitter said then that its rationale for developing per-country blocking was to minimize the strictures on free speech across its entire platform. Meaning that censored content (such as nazi hate tweets) would only be blocked for the smallest possible number of Twitter users.
“Starting today, we give ourselves the ability to reactively withhold content from users in a specific country — while keeping it available in the rest of the world. We have also built in a way to communicate transparently to users when content is withheld, and why,” the company wrote in 2012, saying also that it would “evaluate each request [to withhold content] before taking any action”.
So Twitter’s nazi filter was certainly not designed to be pro-active about blocking hate speech — but merely to react to specific, verified legal complaints.
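The reactive, per-country withholding model Twitter described can be boiled down to a simple lookup: content stays visible everywhere except jurisdictions with a verified legal complaint on file, and withheld viewers get a transparent notice rather than a silent removal. Here’s a toy sketch of that logic — every name and structure here is hypothetical, not Twitter’s actual API or implementation:

```python
# Toy sketch of per-country "reactive withholding". All names are
# hypothetical illustrations, not Twitter's real API or data model.

# Maps tweet ID -> set of country codes where a verified legal
# complaint requires the tweet to be withheld.
withheld_in = {
    "tweet-123": {"DE", "FR"},  # e.g. outlawed nazi imagery
}

def visible_to(tweet_id, viewer_country):
    """Return (is_visible, notice) for a viewer in the given country."""
    blocked = withheld_in.get(tweet_id, set())
    if viewer_country in blocked:
        # Communicate transparently when content is withheld, and why.
        return False, f"Withheld in {viewer_country} due to a legal request."
    return True, None

# A viewer whose account country is set to Germany doesn't see the tweet;
# everyone else still does -- the "least restrictive" design described above.
print(visible_to("tweet-123", "DE"))
print(visible_to("tweet-123", "US"))
```

This also illustrates why the settings workaround above works: flipping your account’s country flips which side of the lookup you’re on, without touching anything else about your feed.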
“One of our core values as a company is to defend and respect each user’s voice. We try to keep content up wherever and whenever we can, and we will be transparent with users when we can’t. The Tweets must continue to flow,” it wrote then.
“We’ve been working to reduce the scope of withholding, while increasing transparency, for a while,” it went on to say, explaining the timing of the move. “We have users all over the world and wanted to find a way to deal with requests in the least restrictive way.”
More than five years on from Twitter’s restated conviction that “tweets still must flow”, tech platforms are increasingly under attack for failing to take responsibility for pro-actively moderating content on their platforms across a wide range of issues, from abuse and hate speech; to extremist propaganda and other illegal content; to economically incentivized misinformation; to politically incentivized disinformation.
It’s fair to say that the political climate around online content has shifted as the usage and power of the platforms have grown, and as they have displaced and eroded the position of traditional media.
To the point where a phrase like “the tweets must flow” now carries the unmistakable whiff of effluent. Because social media is in the spotlight as a feeder of anti-social, anti-civic impacts, and public opinion about the societal benefits of these platforms appears to be skewing towards the negative.
So perhaps Twitter’s management really has finally arrived at the realization that if, as a content distribution platform, you allow hateful ideas to go unchallenged, your platform will become synonymous with the hateful content it is distributing — and will be perceived, by large swathes of your user base, as a hateful place to be, precisely because you are allowing and enabling abuse under the banner of an ill-thought-through notion that the “tweets must flow”.
Yesterday Dorsey claimed Twitter has been working on trying to “counteract” the problem of voices of abuse victims being silenced on its platform for (he said) the past two years. So presumably that dates from about the time former CEO Dick Costolo sent that memo — admitting Twitter ‘sucks at dealing with abuse’.
Although that was actually February 2015. Ergo, more than two years ago. So the question of why it’s taken Twitter so very long to figure out that enabling abuse also really sucks as a business strategy is still in need of a definitive answer.
“We prioritized this in 2016. We updated our policies and increased the size of our teams. It wasn’t enough,” Dorsey tweeted on Friday. “In 2017 we made it our top priority and made a lot of progress.
“Today we saw voices silencing themselves and voices speaking out because we’re still not doing enough.”
He did not offer any deeper, structural explanation of why Twitter might be failing at dealing with abuse. Rather, he seems to be saying Twitter just hasn’t yet found the right ‘volume setting’ to give to the voices of victims of abuse — i.e. to fix the problem of their voices being drowned out by online abuse.
Which would basically be the ‘treat bad speech with more speech’ argument that really only makes sense if you’re already speaking from a position of privilege and/or power.
When in fact the key point Twitter needs to grasp is that hate speech itself suppresses free speech. And that victims of abuse shouldn’t have to spend their time and energy trying to shout down their abusers. Indeed, they just won’t. They’ll leave your platform because it’s turned into a hateful place.
In a response to Dorsey’s tweet storm, Twitter user Eric Markowitz also pointed out that by providing verification status to prominent nazis Twitter is effectively validating their hate speech — going on to suggest the company could “fairly simply develop better criteria around verifying people who espouse hate and genocide”.
Dorsey responded that: “We’re reconsidering our verification policies. Not as high a priority as enforcement, but it’s up there.”
“Enforcing according to our rules comes first. Will get to it as soon as we can, but we have limited resources and need to strictly prioritize,” he added.
At this point — with phrases like “limited resources” being dropped — I’d say you shouldn’t get your hopes up for a root-and-branch reformation of Twitter’s policy towards purveyors of hate. It’s entirely possible the company is just going to end up offering yet another set of ineffective anti-troll tools.
Thing is, having invited the hate-filled voices in, and allowed so many trolls to feel privileged to speak out, Twitter is faced with a philosophical U-turn in extricating its product from the unpleasantness its platform has become synonymous with.
And really, given its terrible extant record on dealing with abuse, it’s not at all clear whether the current management team is capable of the paradigm shift in perspective needed to tackle hate speech. Or whether we’ll just get another fudge and fiddle focused on preserving a definition of free speech that has, for so long, allowed hateful tweets to flow over and drown out other speech.
As I wrote this week, Twitter’s abuse problem is absolutely a failure of leadership. And we’re still seeing only on-the-back-foot responses from the CEO when users point out long-standing, structural problems with its approach.
This doesn’t bode well for Twitter being able to fix a crisis of its own flawed conviction.