On Tuesday the US House Judiciary Committee will hold a hearing on hate crimes and white nationalism, a response to the rise in hate crimes since Donald Trump was elected. Jonathan Albright writes that while hate has always existed, the modern internet has made it far more visible and accessible, which in turn has eased its spread. He argues that the current global propagation of hate involves international cells of nationalist cooperation, which play on eight categories of informational vectors to recruit people into their subculture.
For years, across social platforms, app ecosystems, and the rest of the web, I’ve studied the intricate networking and organizational amplification of outrage. The grievous acts that follow in the wake of progressively extreme ideologies are the conspicuous product of globally connected cells, misguided prejudices, and attention amplification mechanisms.
Sharing on the internet has undoubtedly played a part in the spread of extremism, often with tragic consequences. Yet, hate, and hate-based group affiliation, has always existed in society. Before the modern internet, hate was just less visible, its memberships less accessible, and its viewpoints less documented in society-at-large.
Figure 1 – Algorithmically generated “topic” channel for banned account on YouTube. Found August 2018
Without a doubt, our over-reliance on discovery and recommendation algorithms, as well as the ubiquity of social apps and networking platforms — a market controlled almost exclusively by a handful of US-based multinational companies — have helped fuel the rise of hateful ideologies.
At the same time, we must recognize the other contributing factors, such as the decades of careless media attention to the growing flurry of hate-related incidents and acts of horrific violence. Furthermore, especially for Americans, setting aside the legal conundrums involving Section 230 and the First Amendment, we must recognize the relative absence of aggressive, future-focused policymaking that has facilitated the unmitigated spread of hate with few consequences. This is an oversight that spans well beyond the current executive administration and legislative branch.
The Hateful Eight
Hate, in action form, is exceptionally hard to tackle, because the structures that enable and embolden it on a global scale — online and offline — arise through simultaneous, overlapping acts of conflict and cooperation. I’ll call these cultural flashpoints.
Not unlike terrorism, the global propagation of hate involves international cells of nationalist cooperation. Hate therefore spreads through a form of inter-nationalism. As with other subculture affiliations, the steady flow of recruits who are wittingly and unwittingly indoctrinated into hateful ideologies arrive through their expressions of shared values, and at the same time, feelings of mutual exclusion.
The recruitment process of hate — the marketing department — operates across many fronts; I’ll call them informational vectors. To reiterate, none of the following categories will ever be limited to social media or the internet. They include:
Conspiracies; Religion; Race; Geopolitics; Gender; Political Party; Toxic Humor; and Curation
We can debate the harms caused by examples found online from these categories from now through eternity, but it must be recognized that hate — through messaging, signaling, and imagery — involves the transmission of feelings as much as the sharing of “bad” content.
Underneath the categories above are messages that reinforce ideas about marginalization, fear, distrust, and exclusion. These are not posts in the traditional sense, but rather signals that are easily diffused across a vast expanse of ideological outlooks, social settings, and geographic locales.
Despite the American news media’s and legislators’ tendency to fixate on what bad things end up on Facebook, Twitter, Instagram, and YouTube, there are numerous less visible, even invisible spaces that are in every way their equals in terms of impact. And, of course, there are parallel physical spaces that are no less problematic than the worst of the online spaces.
Shutting down a few accounts or YouTube channels, or banning posts on a website after hate narratives go too far — which in the United States typically means when hate acts turn into violent criminal acts — is in essence a stopgap measure. As tends to happen time and time again, new spaces — more entrenched, less connected, and with more extreme memberships — appear to take their place.
“671.1” by Michael Hogan is licensed under CC-BY-NC-SA-2.0
In this way, whack-a-mole takedown efforts are a PR battle won in a never-ending war against chaos. Is this the same as defeat? I’m not entirely sure. Still, hate is ambient — a plight that spans the realms of threaded comments, hashtags, meme communities, game culture, and video feeds; it’s not limited to class, religion, or region, nor is hate bound to social media, messaging apps, and online forums.
Hate is all of these things fused together — affect layered on top of identity and networked culture, and expressed through intra-group affiliations. Hate is a condition.
Without a doubt, the expression of hate has become more fashionable. Today, it is legitimized by authoritative figures and should-be role models in statements, promoted in campaign slogans, and strategically funneled out through headlines, sound bites, and expertly organized meme blasts — all calculated moves meant to secure their standing in society. Hate is being displayed—many would say normalized—in ways that haven’t been seen for decades.
Dare I say expressing hate is…trendy? As a researcher whose work frequently overlaps with the subject, I find that these acts of hate — crimes, almost-crimes, or non-crimes — can sometimes be traced to specific origins, making clear that hate campaigns and their targets have been coordinated. In these scenarios, it’s easier to assign blame for the consequences, a result that comes with its own unique set of challenges. More often, however, hateful acts surface more randomly.
Through shared feelings, hate is a chaos force multiplier. In its wake, we see the rapid fragmentation of social institutions, established communities, and the dissolution of coherent discourse. Hate sidelines what should be social progress, partitioning it into never-ending clusters of conflict and what I like to call outrage porn.
If hate were part of physics, it would be pure entropy creeping into our civic and political processes, rendering them unstable and untrustworthy. This entropic effect helps form the ill-logical basis for many hate acts: catalyzing feelings — namely, the perceived loss of control — that nudge individuals and groups to take matters into their own hands and fix the perceived ills of their world on their own terms.
How do you fight the spread of chaos?
We must expand our definition of what hate actually represents: First and foremost, as I’ve argued here, hate is a feeling — a condition. These feelings precede actions, and we must further recognize that only a tiny fraction of hate-related acts can be categorized as crimes. And if we wait until these acts become crimes, it’s already too late.
Second, while hate has an ideological basis, these ideologies all involve sets of shared values, embedded within specific cultural and social contexts. It’s no wonder, then, that “random” acts of hate seem to increase in frequency and intensity as established cultural and social norms undergo change.
Third, as the diffusion of hate is inter-nationalist by nature, it’s not confined to race (e.g., “white nationalists”) or class, nor is it exclusive to the “online” or “offline” realms. As with curbing foreign terrorism, we need to create broad sets of obstacles and clear disincentives that make recruitment into hate-related ideologies harder and extremist group affiliations less accessible.
We can’t make hate disappear, but we can dramatically slow down the entrepreneurial path of radicalization. Literacy and educational initiatives are fine, but these require extraordinary cooperation and consensus about funding and outcome goals. We can slow the spread of hate through laws, but laws require considerable forethought, not to mention hawkish enforcement and oversight.
Most importantly, fighting hate demands unprecedented cooperation — between business, government, and public sectors, and across agencies, regimes, and countries. Smart policymaking, international cooperation, and proactive private sector buy-in — in this case, sustained technological and personnel investment from American technology companies, with supervision from government and state agencies — is the necessary first step toward reining this chaos in. Not least because hate thrives in environments that lack well-defined rules.
I’ll also double down on the less popular case that media attention to hate incidents — occasionally bordering on obsession, including coverage of horrific acts of racial and ethnic violence and domestic terrorism — only helps to amplify the theater of hate propaganda, promoting opportunistic recruitment into extremist ideas and cultural xenophobia.
The tragedy in Christchurch, New Zealand, illustrates again that in dealing with hate, we face an existential global problem. The events that follow in hate’s wake will never be limited to criminal acts and publicized events in North America, Europe, and Asia. Thus the response by the New Zealand government, and the skillful handling of the massacre by the country’s Prime Minister Jacinda Ardern, should offer policymakers an example of best practices in responding to hate — particularly as a framework for reducing the residual publicity that such events receive in their aftermath.
American policymakers should not limit their initial approach to investigating — and, presumably, attempting to regulate — the spread of hate to acts that can be officially categorized as crimes, because doing so makes the whack-a-mole game that much harder to win.
Approaching the fight against hate as a condition, rather than as acts that “cross the line” into definitive criminality, means the fight is larger in scope. But it also means that the real battle is much better defined. I see this as another prerequisite to combatting the full extent of the problem. The polarization gateway into crimes — in many cases targeted sectarian violence — almost always involves a series of almost-crimes, and before that, a slippery slope of publicly expressed, mutually reinforced feelings.
While it’s fair to stigmatize “AI” — meaning algorithms and machine learning — as a contributing factor, we need sharp criticism of reckless media reporting on hate—whether crimes, almost-crimes, or non-crimes. When journalists and news outlets treat acts of hate like the news-of-the-day, as scholar danah boyd and others have pointed out, they provide the underlying ideologies with the legitimacy and social status they crave. In my view, this has the direct effect of turning cultural extremism into popular culture.
Mitigating the havoc that hate will continue to wreak on our society means taking away the provocative glamour and oppositional outrage its proponents gain with minimal effort through adversarial media attention. This strategy has less to do with censorship — a response I have personally found to result in even more publicity — and more to do with smarter news coverage and editorial decisions. For tech companies, it means curbing free promotion, monetized or otherwise.
All of this requires coming to stark terms with the notion that hate—and hate-related acts—are not about crimes, or even nationalism, but rather are part of a condition that leads people and groups down a path towards unacceptable behaviors. By expanding the definition of hate in the ways I’ve discussed here, American policymakers can move forward with more effective long-term strategies to inhibit its global spread.
- House Judiciary Committee Hearing on Hate Crimes & White Nationalism – 9 April
- New Australian law will mean penalties for Facebook and Twitter for hosting violent content – 5 April
- UK government publishes Online Harms White Paper – 8 April
- A version of this article originally appeared on Medium.
Note: This article gives the views of the author, and not the position of USAPP– American Politics and Policy, nor of the London School of Economics.
Shortened URL for this post: http://bit.ly/2WRizYZ
About the author
Jonathan Albright – Columbia University
Jonathan Albright is the Director of the Digital Forensics Initiative at the Tow Center for Digital Journalism. Previously an assistant professor of media analytics in the School of Communications at Elon University, Dr. Albright focuses on the analysis of socially-mediated news events, misinformation/propaganda, and trending topics, applying a mixed-methods, investigative, data-driven storytelling approach. His work has been featured on Medium, The Huffington Post, The Conversation, and the LSE Impact Blog. He can be found on Twitter @d1gi.