An underground network of activists fighting neo-Nazi propaganda in Colorado was instrumental in getting the racist group Identity Evropa bounced from major social-media platforms. But after a data leak by Unicorn Riot firmly established the group's ties to neo-Nazi activities, including the August 2017 riot in Charlottesville, Virginia, where an anti-fascist demonstrator was killed, Identity Evropa rebranded as the American Identity Movement and launched fresh Twitter, Facebook and Instagram accounts under the new moniker.
Hours after Westword published a March 27 post highlighting Twitter's excuses for giving AIM a platform to engage in the same activities that had previously earned a ban — a rationale that a Colorado anti-fascist dubbed "a load of horseshit" — Facebook and sister service Instagram withdrew the group's privileges.
This move, made just over a week after the gunman in the New Zealand mosque massacre live-streamed the carnage on Facebook, contrasts sharply with Twitter's continued inaction; at this writing, the American Identity Movement continues to tweet out coded hate speech to thousands of followers despite regular complaints from the Colorado advocates. But it might not have happened without months of effort by the Lawyers' Committee for Civil Rights Under Law, a Washington, D.C.-based nonprofit that tore into Facebook for its irresponsibility toward organizations devoted to sowing racial discord in a September 2018 open letter and then worked with its power structure to craft a new, tougher policy toward them.
David Brody, counsel and senior fellow for the committee's privacy-and-technology wing, doesn't shower Mark Zuckerberg with praise for this action. In his words, "Facebook finally did what it should have done from the get-go." Moreover, he continues to fault various social-media services for the way their internal mechanisms actually reward organizations with vile agendas.
"Whether it's Twitter or Facebook or YouTube or other platforms, they all have algorithms that drive the content you see — what you see at the top of your feed versus the bottom of your feed," Brody points out. "What pops up is determined by decisions these companies have made about what you'll see first or second or last. And in the case of YouTube [which is owned by Google], there's a particular problem with how its recommendation engine drives users to increasingly extreme content. That has a real effect on radicalizing people who had not previously been exposed to white supremacist content, and it drives them further and further to these fringe beliefs and conspiracy theories. That's a tangible consequence of decisions these platforms make to maximize engagement instead of being concerned about their users' well-being."
According to Brody, the roots of the committee's interactions were planted in May 2018, "when there were news reports from Motherboard about how Facebook was treating white nationalism and white separatism differently from white supremacy" — something he sees as "a distinction without a difference."
Shortly thereafter, he continues, "we reached out to Facebook and began a dialogue with them that went all the way to the senior leadership of the company — and that included our letter in September, where we laid out why their policy was incorrect and why white nationalism and white separatism are white supremacy. They're just different names for the same thing. We backed that up with academic research and a list of experts they could consult with, as well as examples of white nationalist content that was active on their site and that was the equivalent of white supremacist content."
Discussions with Facebook continued into 2019, culminating in "Standing Against Hate," the statement issued to explain the plug-pulling on the American Identity Movement and other similar outfits. It reads in part:
Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion — and that has always included white supremacy. We didn’t originally apply the same rationale to expressions of white nationalism and white separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.
But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and white separatism cannot be meaningfully separated from white supremacy and organized hate groups. Our own review of hate figures and organizations — as defined by our Dangerous Individuals & Organizations policy — further revealed the overlap between white nationalism and white separatism and white supremacy. Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism.
Brody isn't ready to give Facebook and Instagram an unambiguous thumbs-up for this effort. For one thing, he has no idea how many groups like AIM have actually been dropped, because "Facebook doesn't provide a lot of transparency about how it's enforcing its rules, and it doesn't provide a lot of data about which rules are violated, how they're being violated, who is being affected by hateful activities on the platform or who's being targeted," he says. "We don't have any serious metrics to judge the efficacy of their enforcement, which is one of the big challenges here. We're a national civil-rights organization with some resources, but it's not like we have special access. We're looking at the same Facebook everybody else is. There's really no transparency to allow us or others to know what's going on under the hood."
In the meantime, Brody says, "we've been paying attention to Twitter, as well, and Twitter has at least as many problems with white supremacist content and other hateful activities on its platform, if not more. The structure of Twitter really enables a lot of direct harassment of people of color, women, religious minorities, LGBTQ individuals and others from marginalized communities. And even though Twitter is a smaller company than Facebook, they've done a worse job of trying to mitigate these issues — and they need to do a lot more. They created a platform that enables this type of activity. They hold the responsibility for cleaning up the mess they've created."