Hateland Page 7
These analytics are one example of how social media is omnipresent, “even if you are not actively involved in social media. Today if you search for a product, the results are dominated by user content and opinion—this shapes all online users’ opinions.”23
But this technology, while designed to increase user engagement, also meant that when Dallas cop killer Micah Johnson looked up police shootings of black males from his bedroom, he was automatically referred to similar stories and videos, many of which were more graphic, more emotional, or even essentially anti-cop propaganda. Likewise, the fundamental structure of social media would have guided Muhammad Abdulazeez and Dylann Roof to increasingly extreme material online. When certain people are repeatedly exposed to this material, their chances of becoming radicalized rise.
The analytics guiding a future extremist's online searches are particularly dangerous because people tend to consume content that confirms their existing viewpoint. This way of making sense of a nuanced, confusing world—one made even more complicated by the massive amount of information online—is known as “confirmation bias.” The analytics employed by Facebook, YouTube, and other services have automated and weaponized this tendency, encouraging a human cognitive quirk to become a bionic mania.
More than anything else, perhaps, social media and other online content strive to make every online activity, including radicalization, more convenient, fun, and addictive. Someone with a passing interest in Black Nationalism can read online extremist propaganda for days. A white supremacist with a smartphone can go to a chat room and discover a whole new community. A shy fourteen-year-old kid can join a conversation on the InfoWars website about how the Parkland School shooting was faked. Sovereign Citizens can urge each other on and teach their tradecraft through the comment stream of mainstream online publications.
Even video games are used for recruitment: Islamic extremists have overdubbed footage from the violent video game Grand Theft Auto with music and Arabic exhortations. White nationalists are using video game chatrooms to recruit and radicalize youth into their white supremacist worldview.
Radicalizing people remotely and below the radar was, until recently, virtually impossible. A 2013 study, which interviewed convicted terrorists about the role of the internet in their radicalization, found that the process is increasingly covert, “where individuals are not attending mosques to discuss radical views, but are instead turning to the Internet to find information in line with extreme beliefs”—including, for example, sharing beheading videos.24
One of the biggest mysteries surrounding the crimes committed by the three young men at the beginning of this book was how their attacks appeared to come out of nowhere. By providing access to extremist ideology, charismatic leaders, and online communities, the internet essentially created sleeper cells. Johnson's mom referred to the material he consumed on the internet as “poison,” but it was still hard for her, and many other people, to imagine that a communications device could play the same role in radicalization that living in a house full of skinheads used to. Nonetheless, social media functions as a kind of one-stop shop for radicalization, providing a sense of community and access to new ideological messages as well as “powerful video and imagery that appears to substantiate the extremists’ political claims.”25
The first high-profile evidence of social media's new capacity for radicalization came in April 2013 during the investigation of the Boston Marathon bombers, Dzhokhar and Tamerlan Tsarnaev. The two brothers were reportedly “motivated” by extremist Islamic beliefs but “not acting with known terrorist groups.”26 They were, however, well-versed in social media. In 2012, Tamerlan had created a YouTube channel promoting footage of Chechen jihadist leaders as well as a militant Islamic preacher.27 Further, the brothers had reportedly learned to build “explosive devices from Inspire, the online English magazine of the Qaeda affiliate in Yemen.”28
The bombing was probably the first successful domestic attack in which internet-based radicalization was a primary driver. It was a whole new threat, one that, according to the New York Times, “federal authorities have long feared: angry and alienated young men, apparently self-trained and unaffiliated with any particular terrorist group, able to use the Internet to learn their lethal craft.”29
But some questions remained as to whether it was that neatly wrapped. In 2012, while visiting his parents in Dagestan, Russia, Tamerlan had reportedly attended a radical Islamic mosque. A Boston-based man named “Misha” was reported to be another major offline catalyst.30 Finally, the bombers’ mother, along with Tamerlan, had been placed on a US watch list in 2011.31
Regardless of exactly what percentage of their radicalization and logistics were web based, the internet clearly played a huge role in the attack. But the foreign-born, Muslim brothers didn't represent the bigger threat to America. They had attracted at least some intelligence-community attention on their way to lasting infamy. Waiting in the wings were the much more dangerous native-born Americans who flew completely under the radar during their period of radicalization, right up until the moment of their attack. These were the forerunners of the extremist attacks eventually carried out by Johnson, Abdulazeez, and Roof.
In December of 2013, Terry Lee Loewen was arrested for attempting to detonate a bomb at Wichita's Mid-Continent Airport. The fifty-eight-year-old white Midwesterner, uniformly described by family and friends as “a good guy” and “normal,”32 was believed to be primarily motivated by the website Revolution Muslim and videos of Anwar al-Awlaki. He had also been active on Facebook, advocating for violent jihad.
This threat was born of the social media that had developed since 2008. The radicalization still followed the same broad diagnostic model developed decades earlier. On a personal level, it generally began with people who had frayed safety nets. But social media's capacity for remote radicalization made it much faster, deeper, and harder to detect. Three decades after Louis Beam hooked up his white supremacist Bulletin Board System, the internet had finally developed into a remarkable force multiplier.
“Open the door. It's the police. We're here to rescue you.”
The group of people looked around the room where they had barricaded themselves. They had all fled the main hall during a concert in Paris on November 13, 2015, after ISIS-related extremists began firing into the crowd, eventually killing eighty-nine people. By show of hands, they decided not to open the door. The vote saved their lives. The man outside was one of the terrorists.33
What was remarkable, aside from the attack's carnage, the worst in Paris since World War Two, was how well coordinated the attackers were. The assault began with an explosion at the Stade de France and continued south through multiple restaurants and cafes in the center of the city. By the end of the night, 130 victims were dead and over 400 people were injured. Pro-ISIS groups began using the hashtag #Parisburns34 in their tweets. And intelligence agencies were left wondering how the group of seven terrorists had managed to plan and coordinate the attack without being discovered.
By 2015, they had plenty of options: Orbot, RedPhone, iMessage, and WhatsApp, all of which allowed users to communicate with anyone in the world essentially freely and anonymously using encryption. These apps were another gift from social media to extremists everywhere. In fact, social networking had become so important to product engagement that all sorts of non-telecommunications electronics had embedded communication capacities. The day before the Paris attacks, Belgium's minister of the interior Jan Jambon had suggested that “PlayStation 4 is even more difficult to keep track of than WhatsApp”—quickly turning media attention to a gaming system that has sold over thirty million units worldwide.35
PS4 did indeed allow gamers to communicate via encrypted text and voice chat as well as create their own private chat rooms. But some law enforcement officials suggested that communication via PS4 and other gaming systems could be hidden even without encryption, through techniques that sounded like a digital version of Cold War-era espionage.36 For example, an operative could spell out an attack plan in Super Mario Maker's coins and share it privately with a contact. Or two Call of Duty players could write messages to each other on a wall in a disappearing spray of bullets.
While it turned out that PS4 was not part of the November 2015 Paris attacks, the consoles were, in fact, being used to communicate with terror groups. Earlier in 2015, an Austrian teenager had used a PS4 to contact ISIS. The console reportedly contained information that included bomb-making instructions.37
The gaming systems had also been used to evade the attention of American law enforcement. In 2017, the FBI searched a PS4 as part of an investigation into a child pornography ring.38 Later that year, the FBI compelled Sony to provide information from a PS4 user who was suspected of using the gaming console to communicate with a jihadist group.
In addition to radicalization and recruitment, social media offered a whole new range of tools to extremists preparing to take action. In August 1994, when Timothy McVeigh began collecting materials, he had to travel for hours, sometimes days, to buy explosives from gun collectors and fertilizer from farming co-ops. He borrowed a dictionary to look up the fuel “anhydrous hydrazine” before searching a telephone book for a company that would sell the volatile substance. He disguised himself as a motorcycle racer to buy the explosive fuel at a racetrack. He was turned down after asking an acquaintance for technical help on the bomb. In December 1994, he began on-site selection and surveillance of the Alfred P. Murrah Building, four months before his eventual attack.
This was a tremendous amount of time-intensive work, all of which had to be accomplished without being detected. Access to Google Maps, YouTube, WhatsApp, and other modern social media would have completely changed McVeigh's timeframes.
The apotheosis of the internet's facilitation of remote radicalization and the wide availability of operational information is the lone wolf: Johnson, Abdulazeez, Roof. Though McVeigh was the only one executed after the Oklahoma City bombing, he was not a true lone wolf. Three other people were arrested for their involvement in his plot: Terry Nichols, who was sentenced to life in prison; Michael Fortier, who received twelve years; and his wife, Lori, who was granted immunity in exchange for her testimony.
Since then, web-based resources have provided all sorts of valuable information for plots and, critically, for the actors behind them. None of this should come as a surprise, though. The internet doesn't just lower barriers to making a website or uploading a video or selling handmade knitwear; it reduces the logistical challenges of everything, including accessing terrorist tradecraft.
What's more, the internet is often heralded for encouraging and accelerating “online entrepreneurial activity.”39 This is exactly what we see in the negative mirror when we look at violent lone-wolf extremists. They are the ultimate self-starters, with a different focus. Instead of adding a credit card processing solution to their e-commerce site, they are using Google Street View to create a complex targeting package. Thirty years after Louis Beam called for a “leaderless resistance” and a decade after Tom Metzger enjoined skinheads to “operate like a Nazi submarine,” the internet had made it easier than either could have imagined.40
The social-media-driven revolution between 2006 and 2008 turned remote radicalization into a widespread phenomenon. Social media provides a one-stop shop to plug in an aggrieved, adrift young man. While not every detail of Micah's, Muhammad's, or Dylann's radicalization and attack was facilitated by these forces, it's hard to imagine them carrying out the same attacks ten years earlier—or in any era without pervasive exposure to the resourceful and addictive network of computers, phones, and tablets.
The “success” of social media was very different, however, from what early pioneers of online hate had imagined. Its impact wasn't the result of lowering the financial cost and logistical issues of spreading extremist rhetoric and ideas, but lowering the social cost associated with these beliefs. In 1995, online extremist forums already provided anonymous, virtual meeting places for neo-Nazis who felt like outcasts among their neighbors, co-workers, and peers at school. But it wasn't until after 2008 that sites like Facebook normalized the idea of spending a huge chunk of your social time online, meeting new people, frequenting interest groups, and developing identities. By 2016, another form of web-based activity would change online social dynamics again in a bizarre and, in some ways, more profound way.
In 2009, Time magazine named Federal Reserve Chairman Ben Bernanke its Person of the Year for his work tamping down the worst financial disaster in eight decades. That same year, someone named “moot” won the online poll for Time's Most Influential Person of the Year with nearly seventeen million votes, a particularly impressive showing for a recluse whose age Time listed as “unknown.”1
It turned out that moot was the pseudonym of Christopher Poole, a skinny twenty-one-year-old programmer who lived in his mom's apartment in a suburb of New York City. In February 2009, the Washington Post called him “the most influential and famous internet celebrity you've never heard of.”2 But though moot's accomplishment, founding 4chan, a website with roughly seven million users,3 was impressive, his victory was a fraud.
According to tech blog Music Machinery, the scam began when users of 4chan realized that moot's name had, improbably, ended up on Time's long list of candidates for Top 100 Most Influential People. Amused by the thought of their site's “overlord” winning the poll, several people designed autovoters, programs that essentially stuffed the online ballot box.
When the poll started, Time had lax online security, but after moot quickly went ahead by millions of votes, the magazine's tech staff reset moot's vote total and began requesting validation from voters. This crackdown merely enraged 4chan's technically adept users who retaliated by creating a special channel labeled #time_vote dedicated to dismantling Time's anti-hacking defenses.4
It was not the first time the results of Time's online poll had been suspect. The qualifications of the 1998 winner, wrestler Mick Foley, AKA “Mankind,” had also been questioned by staff. But, in 2009, 4chan users went further than fixing the top spot: they elegantly gamed the top twenty-one finishers. After moot came Anwar Ibrahim, a Malaysian member of parliament recently accused of sodomy. Rick Warren, the evangelical pastor, came in third.
By the time this list of unlikely figures ran from one to twenty-one, the first letter of each winner's name spelled out “marblecake also the game”—a reference to obscene in-jokes that had developed on 4chan. The ambitious yet seemingly meaningless prank said a lot about the underground hacker culture that developed around sites like 4chan, 8chan, and reddit. After marinating for years underground, the culture would explode into the mainstream beginning around 2008, changing social media in a manner that would benefit extremist ideology in unexpected ways.
Poole had launched 4chan in 2003, after Friendster and Myspace were up, but before Facebook, Twitter, and YouTube. Like those sites, 4chan was quickly adopted by young people to message, post images, and create their own communities, but that was where its similarities with traditional social media ended.
First, 4chan relied on what Poole described as “decade-old code and decade-or-two-old paradigm.”5 It didn't have Friendster's cute branding or attractive graphics, and it made no attempt to update its format. Instead, 4chan was a functional, bare-bones image board site. People selected a board matching their interests—anime, pornography, cannabis, politics, or dozens of others—and used text and images to jump into the conversation along with millions of other users. In terms of layout, it had more in common with the pre-web bulletin board systems used by Louis Beam and Tom Metzger than with Facebook.
In fact, 4chan succeeded precisely because it was a blank slate, a stark platform filled with an enormous variety of the alternately juvenile, surreal, hilarious, and obscene user-created content known as memes. In 2009, 4chan was best known for creating “lolcats”—cute pictures of cats with bits of humorous text, sometimes in broken English, which had become incredibly popular.6 Lolcats were reportedly born on “Caturdays,” weekends dedicated to posting pet pictures.
4chan was also known for pranks like “bait-and-switch” links that, when clicked on, sent the user to an unexpected location. The most famous of these pranks, called “rickrolling,” eventually fooled over eighteen million people into clicking on phantom links which sent them to the awkward video for Rick Astley's 1987 number one hit “Never Gonna Give You Up.”
Myspace's slogan was “A Place for Friends,” but if 4chan had a credo, it was “Make fun of everyone.” The site, populated at any given minute by hundreds of thousands of mostly tech-savvy teenage and twenty-something males, was not nearly as cute and fuzzy as its mainstream manifestations suggested. For example, the punchline of the Time magazine hack, “marblecake also the game,” included in-joke references to scatological humor and an obscene but ridiculous sex act. It was also a joke that very few people would understand.
To 4chan users, that was the point. Anyone could join the site, but users were extremely territorial. This defensiveness—even more fervent given the ultimate impossibility of policing borders on a public website—was clear in their slang. A common insult on 4chan was “normie,” meaning a run-of-the-mill person. “Basic bitch” was a derogatory term for consumerist mainstream women. “Chads and Stacys” were stereotypes of good-looking, athletic young men and women: one online graphic portrays them as a football player and a cheerleader. 4chan users often considered themselves outsiders or nerds and were full of disdain, or resentment, for any form of mainstream “success.”
So, while 4chan users were after lulz, or laughs, they didn't care who they offended, or who laughed with them. Some users created the loveable lolcats; others invaded a cat lovers’ chat room with dead cat jokes. Users could also rally for more damaging large-scale lulzy action, like denial-of-service (DoS) attacks, in which thousands of 4chan-ers blitzed a targeted site with so much traffic that its servers were overwhelmed and it crashed.