Over the past decade the combination of smartphones and social-media platforms like Twitter, Facebook, and YouTube has changed the media landscape. Our networks for gathering and sharing information have become more specialized and more accessible at the same time — and, as we learned from Russia’s interference in the 2016 presidential contest, more vulnerable to being exploited. Information spreads faster than ever these days, and when that information is false or twisted, society — and democracy — suffers.

Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, has been studying disinformation and media manipulation for more than a decade. In the late 2000s, as a PhD candidate in the English department’s folklore program at the University of Oregon, she began researching the emerging subculture of “trolling” on the website 4chan. This fringe community was creating its own way of communicating by combining found images with sometimes absurd slogans to make memes: jokes or ideas that can be easily shared and quickly read on the Internet. It was a fascinating new frontier, but as time went on, meme culture became increasingly weaponized by neo-Nazis, white supremacists, and others who used irony to smuggle bigotry into the mainstream. “I was just trolling!” became a catchall phrase to excuse such behavior.

These efforts to sow discord and hatred took advantage of — and, Phillips argues, were even incentivized by — the libertarian ethos of Facebook and Twitter. The result was to pump poison into what Phillips calls our “media ecosystem,” a vast realm that encompasses not only news organizations like The New York Times and CNN but also the seemingly infinite world of YouTube videos; the perpetual scroll of social-media feeds; and the alarmist, right-wing pundits of Fox News or Breitbart. What all these branches of the ecosystem share is a desire to grab our attention and use it to make a profit. This “attention economy,” as it’s come to be called, relies on our reading, clicking, liking, and sharing information, regardless of whether that information is true.

Phillips’s book You Are Here, written with coauthor Ryan M. Milner, was published in March in an online-only edition. (It will get a hardcover release in 2021.) It’s peppered with ecological metaphors that compare media networks to roots in a forest. Phillips spent several years at Northern California’s Humboldt State University, first as an undergraduate and later as a lecturer, and she would routinely run through the redwood forests there. “The root systems are so densely interconnected,” she told me, “that it’s really hard to distinguish where one tree ends and another tree begins.” Trees can share nutrients through their roots. “If toxicity enters one side of the forest, it ends up on the other side.”

It’s an apt metaphor for the spread of polluted information, including conspiracy theories, which Phillips and Milner explore at great length, from the Satanic Panic of the 1980s to the current QAnon movement, which posits that leaders of the Democratic Party run a child-sex ring and worship the devil. (There are a handful of QAnon believers running for elected office in November, and many more who have publicly signaled interest in the theory.)

A chapter of You Are Here titled “The Gathering Storm” outlines the elements that have brought us this deluge of conspiracy theories: entrenched evangelical Christian beliefs that have fostered a willingness to demonize the “other”; media companies so desperate for our attention that they employ complex algorithms to show us more of what they think we want; and the ease with which polluted information can spread from person to person. In Phillips’s view, we are facing a storm that threatens the concept of democracy, and most of us don’t even know we’re contributing to it by sharing the latest viral video. “So much of the pollution that is loosed across the landscape is unintentional,” she says.

I spoke with Phillips several times over the phone this past summer as the U.S. pandemic death toll climbed to the highest in the world and wearing a mask in public became a politically divisive issue. She was preparing to teach two sections of a class called “The 2020 Election” this fall. When she’d come up with the idea in 2019, she hadn’t counted on a global pandemic, an economic meltdown, and a nation violently reckoning with its history of racism.

 


WHITNEY PHILLIPS

© Jeffrey Tolbert

Cohen: Why, throughout this country’s history, have people been drawn to conspiracy theories, from the Salem witch trials to QAnon?

Phillips: There are two major conspiracy-theory templates in the United States, which historian Kathryn Olmsted lays out in her excellent book Real Enemies. One is the “subversion myth”: the idea that some unwelcome other — usually immigrants, a demonized “they” — is threatening “our” way of life. Everything “we” hold dear is in danger from “them.” That template has been used to justify violence against a whole host of immigrants and non-Christians.

In the twentieth century another template appeared: the idea that the government itself is the nefarious “them” that is out to get “us.” When people on the Right promote this type of theory, the argument is that the government is trying to take away our freedoms. In the sixties, for example, a lot of folks in the South were pissed off about federal civil-rights laws. There was a sense that the government was trying to take away “our” way of life — “our” meaning white people’s. These theories cast the government as leftist or communist, characterizations that are often implicitly or explicitly anti-Semitic.

On the Left you find a similar suspicion of the government, only the government is seen as fascistic and right-leaning, particularly because of imperialist engagement in expansionist wars. So the same template can generate very different conspiracy theories.

It’s difficult to say why specific people believe in conspiracies; there’s no one psychological profile that makes a person more likely to be taken in by them. But we know this idea that “we” are under threat from “them” is really motivating for people, because it makes clear who the good guys and the bad guys are.

Cohen: Aren’t some conspiracy theories legitimate? It’s not irrational, for instance, to question the official story about an assassination or a terrorist attack.

Phillips: Sometimes conspiracy theories are indeed reflective of actual conspiracies in the world. And even when the theories are empirically false, they’re often coherent and even rational. People are connecting dots that might not belong together, but from that person’s vantage point, it would be irrational not to connect those dots.

Conspiracy theories became increasingly pervasive and even mainstream in the U.S. after [President John F.] Kennedy’s assassination in 1963, because so many people saw so many reasons to doubt the government’s account. These suspicions had some basis in reality, as the government was cagey about the investigation; if it had uncovered a Soviet conspiracy, World War III very well might have followed. So there was pressure to deflect from that line of inquiry. The result was space for conspiracy theories to flourish, which only intensified after the Warren Commission released its report on the assassination. Although it was supposed to reveal the whole truth of what happened, the report didn’t answer many questions. This, too, gave people increasing reason to doubt the government, and created a whole cottage industry of theorists who felt they needed to do their own research because the government would not provide them with straightforward answers.

Watergate triggered conspiratorial thinking as well. A criminal conspiracy had literally taken place, and the highest offices in the land were engaged in a cover-up. [Republican political operatives broke into the headquarters of the Democratic National Committee in 1972 to aid Richard M. Nixon’s reelection campaign. — Ed.] This was shocking for a lot of people, especially white people. Black and brown and Indigenous people may have been less shocked, because they’d long been screwed over or lied to by the government.

The conspiracies about Roswell, New Mexico, and UFOs kind of go hand in hand with conspiracies about the Kennedy assassination. The supposed UFO crash in Roswell took place in 1947, but it wasn’t until much later — Olmsted notes that the first Roswell conspiracy-theory book wasn’t published until 1980 — that people began to connect government secrecy in Roswell to the overarching belief about the nefariousness of the government. Conspiracy narratives often extend back to encompass things that are not connected. The same process has taken place with QAnon. Originally, Pizzagate — which emerged just before the 2016 election — was a self-contained theory that Hillary Clinton and other top Democrats were running a satanic child-sex ring in the basement of a Washington, D.C., pizzeria. After Donald Trump’s inauguration Pizzagate was retroactively adopted as part of the overarching QAnon narrative about the so-called deep state seeking to destroy the Trump administration from within. Pizzagate couldn’t have made that claim; at the time it emerged, there was no Trump administration to infiltrate.

The comparison between the Kennedy assassination and QAnon raises an important point. Some conspiracy theories have a basis in fact, but many, like QAnon, are absolutely unmoored from empirical reality, and it’s important to acknowledge that they’re false, especially ones that are potentially dangerous. Claims that the coronavirus pandemic is a hoax or just an exaggeration have consequences for millions around the world. But it’s also not helpful to frame these theories as being irrational, bizarre beliefs. When you accuse someone of being “crazy” for believing a conspiracy theory, you’re failing to consider how that theory dovetails with the person’s worldview. I’m not saying we need to hold hands with QAnon believers, but to respond effectively to empirically false claims, you need to have some understanding of why a person believes them. If all we do is mock QAnon believers, we are much less likely to have a conversation that could help that person see the world differently.

Cohen: The JFK assassination occurred shortly after television made its way into many American homes. How did this affect its impact?

Phillips: Television was an integral piece of the puzzle. The presidential contest between Nixon and Kennedy is considered the first truly televised campaign. Part of the reason Kennedy was successful is that he looked better on TV. In their first televised presidential debate, Nixon was super sweaty. [Laughs.] TV existed before that, but, as with all new technologies, it took time for it to become fully integrated into people’s lives and a part of how they understood the world. And once that happened, optics became increasingly important in politics. The civil-rights movement is a great example of that. Television was absolutely critical to the movement. It was able to capture the attention of America, especially white people who hadn’t seen racism like that, or had willfully ignored racism like that, in their own experience. To see John Lewis walk across the Edmund Pettus Bridge and get his skull cracked open was galvanizing, or at least eye-opening, for many white people.

It wasn’t just mass media that changed during that time. Media technology had also become increasingly accessible. The Zapruder film of Kennedy being shot wouldn’t have been possible ten years earlier. A vast information-sharing network arose among conspiracy theorists in the sixties and seventies because more people could use printers and other recordable media like audio- and videotapes to share their ideas with others. It’s not like we didn’t have communication networks until there was Facebook. People were exchanging ideas back then with the technologies that were available, which in turn allowed citizen sleuths to report and amplify their own versions of the Kennedy assassination. They were able to communicate more about all kinds of other real and imagined government conspiracies — and there were lots of real conspiracies in the mix, from Watergate to Iran-Contra — which just increased the theories’ visibility and, for some, plausibility.

What people were able to do back then, of course, pales in comparison to what we can do now. We have way more communication tools today than people did in the seventies. And the people in the seventies had way more tools than people did in the fifties. The underlying pattern is: The more communication tools we have, the more saturated our media environment becomes. And the stronger the networks, the faster the information can spread. Those networks will undoubtedly continue to strengthen as new technologies are introduced, but right now there has never been a time when conspiracy theories could spread as quickly or as easily.

There’s another factor influencing how and why conspiracy theories grew in visibility in the post-JFK era, which Olmsted also highlights: the willingness of many within the center-left news media to repeat the government’s official version of events. Journalists of course broke stories that were not positive toward the government, but even following the publication of the Warren Report, lots of major news outlets framed antigovernment grassroots conspiracy theories — and the theorists themselves — as laughable. When the government was later shown to have been less than transparent, or to have outright lied, conspiracy believers increasingly lumped these journalists in with the untrustworthy “them.”

The idea of trust here is critical. During the same time frame that belief in conspiracy theories ticked up, public trust in institutions plummeted. Olmsted explains that researchers working in 1965 were starting to measure an across-the-board nosedive in trust: increasing numbers of people stopped trusting government officials, news organizations, and the very concept of expertise.

Cohen: How did this result in what you call “asymmetric polarization”?

Phillips: It’s not a straight line, but many of these same forces, especially changes in communication technologies and mistrust of mainstream institutions, pushed the media ecosystem on the Right to become increasingly extreme. The polarization is asymmetric because it hasn’t happened to the same degree on the Left. This goes back decades. In the 1950s evangelical Christians were growing resistant to the increased secularity of the mainstream media. There just wasn’t much room for God on the airwaves. Evangelicals decided to build Christian media networks, especially radio networks, that would serve their own interests, founded on the idea that the center-left mainstream media is anti-Christian and made up of elitist intellectuals who don’t care what happens to the Christian “us,” particularly in rural America.

One of the consequences has to do, again, with trust. The more steeped you are in the idea that the “leftist” media is unchristian and bad and out to censor and delegitimize “us,” the more inclined you are to distrust what’s reported by the mainstream news. So people on the Right, who have been hearing these anti-mainstream, anti-leftist messages maybe their whole lives, are more often googling for alternative explanations of news stories. If you’re a Joe Biden supporter and you don’t think institutions are the enemy, you’re also generally likely to think that what The New York Times publishes is factually true. Why would you go to YouTube for an alternate explanation?

But I cannot stress this enough: It’s not that people on the Right aren’t smart. It’s that they are more inclined, and have reasons for being more inclined, to search for alternative explanations. Those searches then trigger algorithms on Facebook, YouTube, Twitter, or Google to start serving up increasingly extremist content — not because Google loves extremism, but because Google is invested in your finding what you want online, or what its algorithms think you want, so you will keep using Google. Then they can monetize your behavior and information.

Cohen: Is this what you refer to as “algorithmic docenting”?

Phillips: Yes. An algorithm is a set of instructions for completing specific tasks. Algorithms can do a whole range of things online, but on social media, one of their most critical roles is determining what people see on-site. They bring you to certain places and highlight certain content, the way a docent would with museum visitors. What you’re guided to can be determined by all kinds of data, from what links you’ve clicked before, to what websites you’ve gone to, even what kind of browser you’re using. Algorithms can also pull information from other users who have searched using the same terms. If people who search for x have also searched for y, that helps the algorithm guess what you want when you search for x. You may feel you are making active choices — you’re searching; you’re clicking; you are doing it yourself — but really your choices are restricted by what’s being offered to you by an algorithm you probably don’t realize is there. You are being given a limited menu to choose from, which limits your exposure to the full range of information online, including opposing viewpoints.
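
To make the docent metaphor concrete, here is a deliberately tiny sketch, in Python, of how co-search data can steer what a user is offered next. Everything in it is hypothetical: the session data, the recommend function, and the ranking rule are invented for illustration and bear no relation to any platform’s actual, proprietary system.

```python
from collections import Counter

# Hypothetical illustration of "algorithmic docenting": what a user is offered
# next is inferred from what other users who made the same search went on to
# look at. The sessions below are invented; real systems use far more signals
# and are proprietary.
past_sessions = [
    ["moon landing footage", "moon landing hoax"],
    ["moon landing footage", "flat earth proof"],
    ["moon landing footage", "apollo 11 documentary"],
    ["moon landing footage", "moon landing hoax"],
]

def recommend(query, sessions, top_n=2):
    """Suggest the terms that most often co-occur with `query` in other sessions."""
    co_searches = Counter()
    for session in sessions:
        if query in session:
            co_searches.update(term for term in session if term != query)
    return [term for term, _ in co_searches.most_common(top_n)]

# A new user searches the same phrase and is handed the most common follow-ups,
# not the most accurate ones: the hoax search ranks first.
print(recommend("moon landing footage", past_sessions))
```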

Cohen: How could one of these algorithms pull someone into believing in a conspiracy that, say, lizard people are secretly running the world?

Phillips: You click on one lizard-people video, and YouTube’s recommendation algorithm will serve up another that’s similar. Maybe a bunch of folks who previously watched the video about lizard people went on to watch one about how the moon landing was faked. The algorithm assumes that you, too, will be interested in the claim that the moon landing was a hoax. It’s definitely not going to serve up a video that challenges those views. You’re more likely to be shown additional videos that affirm them.

Now, the average person is not going to go on YouTube, watch a couple of lizard-people videos, and suddenly believe in lizard people. That’s not how cognition works. But if you are already inclined to be sympathetic to a conspiracy theory, the algorithms will inundate you with what looks like evidence to support it. It would be irrational for your beliefs not to be strengthened.

Cohen: This occurs across different platforms, right? You see something on YouTube, and then you go to Google or Facebook and find links to groups or communities that reinforce these ideas.

Phillips: Oh, sure. As I was saying, though, the people who are most likely to become radicalized are already partway there. Safiya Noble, in her book Algorithms of Oppression, talks about Dylann Roof. [Roof is the white man who killed nine Black people at a South Carolina church in 2015. — Ed.] The Internet helped radicalize him, but you don’t go shoot up a Black church when you aren’t a racist to begin with. According to Roof, he had done a Google search for “black on White crime.” What jumps out at me is that he capitalized White and left black lowercase. That seems like a small detail, but it’s the kind of thing that could tell the algorithm he was looking for evidence to support his racism. That’s confirmed by Roof’s own account: all of the search results strengthened his belief that the real problem in America is Black people committing crimes against white people. If someone else had gone looking for similar crime statistics, but their previous behavior online was different, and, say, they had capitalized Black and left white lowercase, there likely would have been different results.

Of course, how different those results would have been is hard to say. But algorithms do pay attention to small details that help them interpret what people want. It’s not that any person in the world could be transformed into Dylann Roof. The issue is the reciprocal relationship that exists between the algorithm and the things we search for and look at online. It’s “garbage in, garbage out”: if you feed Google racist data, then the data that comes out is going to be racist.

We may think that what we’re seeing online is a neutral representation of the world, but that is simply not how these platforms work. We don’t know what the algorithm is not giving us. We need to think about what’s influencing what we see and integrate that knowledge into our searches. Algorithms trick you into feeling confident that you’ve really looked into something: you “did the work.” When people feel like they’ve put in the sweat equity, it’s harder to convince them that they’re wrong.

Cohen: Is YouTube the worst actor in this situation? I’ve heard a lot of criticism of its role in pushing people toward racist or xenophobic viewpoints.

Phillips: It’s difficult to say. Part of the problem with algorithms is that, although we can see their effects, we don’t actually know what they’re doing in the moment. The algorithms are all proprietary, so there’s zero transparency. With YouTube’s recommendation algorithm, you can follow the breadcrumbs it’s leaving for you and see where you wind up, but you don’t know exactly how it’s working: What are the data points? What is it measuring? What is it basing recommendations on? Is it just your behavior, or is it the behavior of other users, too?

Facebook’s news feed is also picking and choosing what you see, but with YouTube it’s easier to observe the recommendation algorithm at work. Many of us have had the experience of going to YouTube to search for something that came up in a conversation — maybe not something we’re normally interested in — and suddenly getting a ton of recommendations for related content.


Cohen: You distinguish between disinformation and misinformation. Can you give some examples of each?

Phillips: Disinformation is false or misleading info that is spread deliberately. Say somebody knows full well that wearing a mask to prevent the spread of COVID-19 is not going to cause asphyxiation, but they spread the claim that it does anyway. Maybe they want to spread chaos and confusion, or maybe they’re part of a foreign-influence effort. Maybe they have a YouTube series and believe they’ll drum up an audience if they’re saying really sensationalist stuff.

Misinformation is when someone reads that masks asphyxiate people and, fearing for others’ safety, shares this information in good faith, trying to let people know that there’s a danger. So the same exact story can be either misinformation or disinformation, depending on who’s sharing it and why.

I try to avoid using mis- and disinformation to describe the things I see online because you need to be able to determine somebody’s motive to know which is which. I prefer to talk about “polluted information” instead, which doesn’t need to posit intent: pollution is pollution, regardless of why it got there. I want to focus on the consequences of that pollution when it goes downstream.

Cohen: Speaking of the mask debate, I saw a video on social media of a woman at a Target, filming herself pulling every mask off a rack and throwing it on the floor.

Phillips: Yes, the public-mask-freak-out video has become a kind of performance art at this point. Even if the person is not doing it specifically to add followers on YouTube, they’re trying to go viral. The point is to make a point. Social media and recommendation algorithms are biased toward people screaming at each other, so they indirectly encourage that kind of behavior. In the process, they make the behavior seem more common than it is, because the content keeps floating to the top of people’s feeds. The percentage of people refusing to follow public-health protocols is tiny compared to the percentage who are wearing masks. I don’t want to minimize the damage anti-maskers can do — people can die as a result of their actions — but they’re a minority. The overwhelming majority of people in this country — the overwhelming majority of Republicans, too — understand that masks are necessary to prevent airborne transmission of this disease.

Social media creates what journalist Lam Vo has called the “tyranny of the loudest”: the people who are being most obnoxious, who are doing the most-enraging things, produce the content that people click on the most. Social-media platforms are, first and foremost, businesses that sell your data. They want content that keeps you engaged, because then they can get more of your data to monetize. The danger is that these freak-out videos normalize the argument that masks are unnecessary or somehow an affront to our freedom. Seeing so many of these videos has the potential to make viewers wonder, “If all these people are apparently engaged in a debate, doesn’t that mean that there is something worth debating?”

So instead of just screaming back at the people who refuse to wear masks at Target, we also need to think about why those videos become so widely shared, and how our reactions to the videos also help spread them and train algorithms to keep spreading them. The attention economy makes mask protests a bigger story than they actually are, and we all end up harmed as a result. And it’s not just a public-health issue. Because the communities that are disproportionately affected by the coronavirus are Black and Latino and Indigenous, anything that normalizes or incentivizes not wearing masks becomes a racial-justice issue.
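
A similarly hypothetical sketch shows why the freak-out videos keep surfacing. The posts, numbers, and weighting below are invented, and real feed-ranking systems are proprietary and weigh many more signals; the point is only the dynamic Phillips describes, in which whatever provokes the most clicking and sharing floats to the top.

```python
# Hypothetical posts with invented engagement counts; real feed-ranking
# systems are proprietary and use many more signals than this.
posts = [
    {"title": "Local clinic explains how masks work", "clicks": 40, "shares": 5},
    {"title": "Shopper films her own mask freak-out", "clicks": 900, "shares": 300},
    {"title": "County posts routine health update", "clicks": 25, "shares": 2},
]

def engagement(post):
    # Assumed, illustrative weighting: shares count for more than clicks.
    return post["clicks"] + 3 * post["shares"]

# Rank purely by engagement, regardless of accuracy or importance.
for post in sorted(posts, key=engagement, reverse=True):
    print(post["title"])
# The freak-out video leads the feed, making a fringe behavior look common.
```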

Cohen: Since George Floyd’s death in May, the conversation around race has taken on a more urgent tone. Do you think that conversation, in our polluted media ecosystem, has the potential to be hijacked by bad actors?

Phillips: Everything has the potential to be hijacked by bad actors, but I think the conversations are still critical to have. Polluted information and racial injustice are fundamentally interconnected. If you are Black or Latino and poor, you’re more likely to live in an area where you’re regularly poisoned by the air, water, and soil. The same is true of polluted information on the Internet: women and people of color and trans people and disabled people are more likely to be targeted by disinformation or harassment campaigns.

For example, Russia’s election interference in 2016 targeted the Black vote specifically, using disinformation tactics that had been tested on Black communities already, particularly Black women. Shireen Mitchell, the founder of Stop Online Violence Against Women, has done a lot of work on this issue. What Russian operatives did in 2016 was create fake accounts on Facebook or Twitter and then engage with nonpolitical content as if they were a real Black person, to build trust and credibility. Over time these groups would get real people to follow and join them, and that’s when they began to post more-political content. It wasn’t just anti-Clinton propaganda. It was messages like “Don’t vote, because your vote’s never going to be counted. The government is always going to silence Black voices. Why even bother? Both candidates are white supremacists.” They were trying to suppress the Black vote by making Black people feel like there was just no use in going to the polls.

People are not zombies: you don’t just throw disinformation at them and they suddenly change their behavior. But calling into question the validity of the voting process, or sowing doubt about whether or not your vote is going to be counted when there really is a history of voter suppression in this country — that’s a serious tactic. Of course, it’s difficult to tell exactly how successful it was, because you can’t count the number of votes that weren’t cast. But those kinds of tactics were pervasive in 2016, and, as Shireen Mitchell and others have been emphasizing, the risk of digital voter suppression is just as high in 2020.

Cohen: I have a colleague who rejects the idea that social-media platforms are inherently bad. Where do you stand on that argument after twelve years of working in this realm?

Phillips: I think the way social media is designed to monopolize our attention, coupled with asymmetric polarization, has set us up to fail. The tools themselves have bias and structural inequality baked into them. Yes, they can be used for good, too, but they make it very difficult for people to understand the ethical consequences of the things they do and say online. It’s hard to envision the downstream impact of what you retweet, especially if, in your mind, you are very clear on what you’re trying to accomplish: you’re not a racist; you’re just making an ironic joke. You’re encouraged not to think about what happens when content meant for you and your friends goes beyond the intended audience and is weaponized by some other audience. You may never even know this happens. In other words, it’s not that people set out to be unethical online; it’s that we are set up not to know whose toes we might be stepping on. That results in a lot of damage.

Cohen: To deter the spread of polluted information, you’re asking people to practice a tremendous amount of self-examination, critical thinking, and patience. The Internet itself — a world of immediate gratification and endless rabbit holes — is somewhat antithetical to those values. How can people better interact with information they find online?

Phillips: By taking a breath and not spreading a story before you know all the details, for one thing. Facebook and Twitter don’t want us to think very hard about what we’re posting, because if we’re not thinking critically or ethically, we’re sharing more. And if we’re sharing more, we’re more valuable to these companies. We see something that pisses us off, and we retweet it to condemn it. That’s amplification, too. One of the most important things we can do in that situation is take a moment and be mindful of what we’re not seeing, what we don’t understand, what we don’t have context for. You see one image, not the whole news story that frames the image. This limits the scope of your vision. When we are confronted with problematic content, really thinking about what isn’t known is one of the most important things we can do to avoid spreading pollution inadvertently.

Cohen: You’ve said that a law called Section 230 is critical to cleaning up online pollution. Could you explain what Section 230 is?

Phillips: Section 230 was part of the 1996 Communications Decency Act, and it says that online platforms aren’t legally responsible for the content users post on them. If somebody posts illegal content to the platform, the platform won’t be held liable as long as it makes a good-faith effort to remove the content. But that was 1996, when Internet services like Prodigy weren’t playing an active role in promoting users’ posts. The law simply wasn’t written for the modern Internet, where platform algorithms are floating content to the top of people’s feeds. They’re not neutral repositories of user-generated information.

Section 230 also says that it is the platforms’ right, but not their responsibility, to moderate content. They can moderate, but they don’t have to moderate. Whether or not they do is up to them. Consequently platforms often base decisions not on protecting their users, but on what makes the most sense from a business perspective.

Legal scholars Danielle Citron and Benjamin Wittes argue, and I agree, that we need to make those safe-harbor protections contingent on whether or not platforms moderate their content responsibly and effectively. If you fail to get the most egregious pollution off your site, you should lose your legal protection. If that were the deal, these platforms would be falling all over themselves to moderate content appropriately, because without Section 230 their legal liability would be astronomical.

The approach that platforms embrace now — with almost no legal guardrails to stop them — is to protect individuals’ ability to say whatever poisonous thing they want, because that’s the best thing for business, which has allowed the spread of polluted information across our online ecosystem. If, instead, regulation aimed to protect the right of the collective not to be poisoned, it would generate different sets of policies entirely. But that’s certainly not going to happen while Donald Trump is in office.

As a side point, he and other Republicans also have a problem with Section 230, and Trump has threatened to dismantle it — but on the grounds that the platforms are deliberately, conspiratorially censoring conservative voices, thereby forfeiting their status as “platform.” The platforms are doing no such thing, but it’s still an effective threat against tech companies, pressuring them not to moderate far-right pollution.

These platforms’ implicit — and sometimes explicit — claim is that, if there’s racist or otherwise terrible content online, the solution is to encourage more people to say nonracist things, and the nonracist voices will win out. Facebook has been particularly insistent on that point. It’s a nice idea, but it doesn’t take into account the role that algorithms play in promoting the most harmful information, which also tends to be the most profitable information. You have a marketplace that favors falsehood, invective, and violence.

Cohen: You write in your book about how evangelical Christianity became entwined with the Republican Party. As the party has evolved and shifted, do you think evangelical Christianity has also shifted or evolved?

Phillips: Many evangelicals have certainly embraced MAGA extremism. This might seem surprising to some, but it reflects trends decades in the making. The emergence of the New Right, which began in the sixties and reached a pinnacle with Ronald Reagan’s 1980 election, represented the takeover of the Republican Party by evangelicals and was an explicit effort to purge moderate voices from the party. This coincided with — and was strengthened by — the rise of evangelical media systems, and many political operatives who were part of the New Right had contacts with, or were even owners of, these media systems. Anne Nelson’s book Shadow Network lays out all of the overlap there. It’s a fascinating and distressing read. A lot of those networks still exist.

The Republican Party’s march to the Religious Right ensured that there wouldn’t be a lot of Republicans in the moderate center, which further contributed to asymmetric polarization: over time, people who were less extreme in their beliefs were just no longer welcome at the table. From a Christian eschatological perspective, this is not irrational: if you view everything as good versus evil, there’s not a lot of room for compromise. A person with a moderate perspective is potentially a threat to the good. But purging the moderate voices from the party ensured that the more-extreme, less-tolerant voices became dominant. We’re still dealing with the consequences of that thirty years later.

Cohen: You make a distinction in You Are Here between valuing individual freedom “from” limitations versus imposing some limitations to promote collective freedom “for” society as a whole. The most timely example of that is the mask debate. This knee-jerk grasping for individual freedom, combined with a disdain for public-health measures, seems a very American type of selfishness.

Phillips: The distinction between freedom “from” versus freedom “for” is baked into social-media platforms’ moderation policies. The platforms were designed from the outset to favor freedoms “from” restriction. This attitude is inherent to a libertarian worldview of “Don’t ever tell me what to do.” And when platforms’ moderation policies hinge on freedoms “from,” of course you’re going to get a deluge of filth, which is actually going to mute and terrorize — or just simply annoy to the point of shutting up — most users.

Maximizing freedoms for the collective, so that everyone can enjoy them equally, actually grants the most freedom. A social-media platform built with freedom “for” as its foundational ethos would encourage the most speech from the largest number of people — and the discourse would look very different.

It’s the same in a classroom: If an instructor allows the three most aggressive, antagonistic students to dominate the conversation, you’re not going to have more discussion. You’re going to have those three people shouting, and it’s going to be a terrible experience for everyone else. You get the best discussion, the most discussion, and the most freedom when everyone feels comfortable taking part. That’s freedom “for.”

In the United States we have somehow come to see freedom “from” as what freedom actually is, which is astonishing to me. The freedom “from” approach usually undermines itself and ends up limiting freedom. You see the same kind of self-defeating cycle with the mask debate: If we don’t all wear a mask, then there’s going to be greater spread of the virus, and more people will get sick, which will mean more lockdowns. That’s less freedom. When you wear masks for the benefit of other people, you have less spread, fewer deaths, and more freedom.

Cohen: QAnon adherents seem to do a lot of what you’ve referred to as “retconning,” where events from the past retroactively become part of the conspiracy narrative.

Phillips: QAnon is actually the vessel for years and years of conspiratorial thinking. This sense that the government is left-leaning, communist (often code for Jewish), and even Satanist has been around for decades longer than QAnon; it was a core plank of the New Right. Beyond that, conservative folks have been primed for decades, across all kinds of media, to fundamentally mistrust anything associated with liberals. Anything the “libs” said was automatically going to be suspect, biased, evil. QAnon may not have entered many people’s lexicons until recently, but the underlying ideas that allowed it to grow have been around a long time. In the context of a global pandemic, we see just how deadly these conspiracy theories can be — and how easily new information is roped into the conspiratorial orbit. When COVID-19 hit, Dr. Anthony Fauci was portrayed on the Right as a deep-state villain, one of these “elitists” who are trying to tell “us” how to live in Middle America. In this view public-health experts warning about the virus and calling upon scientific knowledge become part of a deep-state conspiracy, and COVID becomes the ultimate example of how the deep state — for many, interchangeable with the “libs” — is trying to destroy Donald Trump’s presidency.

Cohen: And someone doesn’t even have to be a QAnon believer to basically hold these views?

Phillips: That’s right: These conspiracy theories have become so woven into far-right mythology that many people believe in the deep state regardless of whether they subscribe to QAnon or not. It’s not just people on the Right who are amplifying QAnon, either. Yes, there are prominent Republicans, including a handful of people running for Congress in 2020, who are unabashed QAnon boosters. And there are obviously lots of people who believe all or some of the QAnon mythology. But that’s not the sole reason the narrative has become increasingly visible. The Left also gives it considerable energy, which is part of the reason there’s seemingly no escape from QAnon these days. That dynamic, where the Left does a lot of helpful amplification work for the Far Right, is something I’ve been studying for years. The result is that many more people have been exposed to QAnon than would have been otherwise. Maybe that hasn’t resulted in QAnon conversions directly, but for those who believe that everything the mainstream media publishes is a lie, all the center-left news coverage attempting to debunk the theory is easily reinterpreted as further proof that the theory must be true.

Cohen: If news coverage is part of the problem, what is an appropriate media response to something like QAnon?

Phillips: That’s a tricky question. There was an inflection point in 2018. People who either believed in QAnon or just thought it was a fun way to manipulate the media had been trying for months to make it go mainstream by getting coverage from center-left journalistic outlets. The moment that really brought it to the fore was a Trump rally in Tampa, Florida, in August 2018, where a group of people showed up in Q shirts. Some stories had already been written about QAnon — comedian Roseanne Barr had tweeted about it in the previous months, generating a flurry of coverage — but QAnon wasn’t something the average American knew about. After that rally, however, journalists treated it as something Trump supporters believed in en masse. If I could go back to 2018, my recommendation would be: “QAnon does not need to be the central story in American politics; don’t let your coverage become a self-fulfilling prophecy. We should be talking about the Trump administration’s policy of locking babies in cages!” That was the big story of the moment in August 2018, when QAnon hit the tipping point.

Once a story tips into mainstream consciousness, however, not reporting on it isn’t really an option, especially if the information potentially endangers people. After that Tampa rally in 2018, the QAnon narrative didn’t just grow in visibility; it roared to the center of a number of other massive stories, like Jeffrey Epstein’s arrest [for sex-trafficking], Trump’s impeachment, and now COVID. I don’t think it’s possible not to talk about QAnon anymore. But there are definitely better ways to do it. The trick is not to focus on QAnon — or any conspiracy theory — as a self-contained, stand-alone story. You’ve got to situate it within broader stories and forces. The ins and outs and intricacies of the QAnon narrative and the specific people who believe it make up a smaller, less revealing, much more polluted story. News stories shouldn’t be exclusively about polluted information. The rest of the ecosystem needs to be taken into account.

Cohen: Earlier this year MSNBC and CNN started cutting away from President Trump’s coronavirus briefings, because he was spreading dangerous misinformation. Was that an appropriate response?

Phillips: Some things can’t be ignored. Donald Trump is the president, and what he says does have consequence. But Donald Trump, too, is just one part of our ecosystem. When you’re trying to report on something Donald Trump says, it’s critical to contextualize that statement within the broader ecosystem and not just reiterate what he is saying.

Of course, space restrictions mean you can’t go into the entire history of the United States every time you publish something, but you can give readers a more well-rounded picture of Trump’s place in the world. He is clearly part of the story, but he’s not the only story. Other factors facilitate him or embolden him or have motivated what he’s done. It doesn’t have to be a binary question of: ignore him or don’t ignore him. It’s a question of how to contextualize him, to best communicate what the public needs to know about a media landscape of which he’s a big part, but certainly not the sole driver.

Cohen: As many media outlets have been decimated by the Internet, they have had to figure out new ways to sustain themselves. Part of that is finding something that people are going to flock to, and often that is the latest outrageous thing the president said.

Phillips: Journalism has always been this way. It’s just become more intense as time has gone by. Now you need to fill not just the twenty-four-hour news cycle but the social-media equivalent, where every second of every day has to be filled with new content. Reporters often don’t have the choice of saying, “I don’t want to cover Trump’s comments on Goya beans. I don’t think that’s worth my time and attention.” If you want your paycheck, you’re going to have to write about whatever Trump says about beans. That’s the attention economy at work; journalists are trapped within it as much as everyone else is.


Cohen: Do you think the Internet has to some degree traumatized us?

Phillips: A fundamentally polluted information ecosystem is as much a mental-health crisis as it is an informational crisis. We can’t just focus on the textual analysis and not consider what the human beings who read those texts are experiencing. It’s disorienting to try to navigate an information ecosystem as polluted as ours. It’s devastating to democracy and to our ability to build consensus and come up with policies that might address our problems. When our information ecosystem is polluted, we can’t talk to each other. Speaking for myself, I am overwhelmed. I do my best, but I don’t always know how we should respond to the latest thing Trump says. The journalists I’ve worked with and spoken to are often at a loss, too. Talking about fact-checking and critical thinking and other ways of analyzing content can be helpful with some audiences, but those same strategies can also backfire. Very often even the best options are still messy.

Cohen: Will media literacy be easier for younger generations to develop, or will it be harder because they’ve been raised in a polluted media environment?

Phillips: Considering kids in particular, we need to begin teaching them to take a broader ecological view of the information landscape, one that focuses on downstream consequences. Just focusing on fact-checking and textual analysis won’t be enough. And we shouldn’t be teaching this just to kids; we need to teach teachers and parents how to teach those kids. If we want our kids to learn how to use social media effectively and ethically, with an eye toward minimizing the spread of polluted information, the adults in their lives first need to be clear about how to do those things. Moving forward, I want to work with educators to try to figure out how we might have this conversation not just with sixteen-year-olds, but with four-year-olds, because frankly I’m most worried about them.

Cohen: You’ve sifted through some dark corners of the Internet studying this pollution. How have you managed to keep your own psyche unpolluted?

Phillips: I struggle a lot with what it means to do this work. Focusing on everything that’s terrible on the Internet has been my job since 2008. I have needed to contend with some serious mental-health consequences, with varying degrees of success.

In February, when the coronavirus was creeping up in other parts of the world but wasn’t here in the U.S. yet, I was experiencing panic attacks while on the phone with reporters trying to talk through these issues, because they’re so daunting. That anxiety has only gotten more intense as the situation has grown more dire. Even talking to you is exhausting, because when you add COVID into the mix, the dangers are so much more immediate. We have to get it right. I don’t know if we can. I don’t know how exactly that would look. But I do know we have to give it everything we have, because there’s so much to lose if we don’t. So, like other people who do this work, I do what I can to help, even though it makes me sweaty and anxious.

We’re not going to solve these problems unless we can all see the structural challenges we face and understand what is required of us. This is where I feel some hope. After ten years of having the same conversations and laying out the same stakes over and over, I’ve seen everyday people start opening up to hearing about these issues. So have many journalists, in my experience. Technology companies have bent over backward to make sure people don’t have these conversations, because if we all understood what these technologies do to us and to democracy, people might be less willing to stay on Facebook just to see pictures of their grandkids. These corporate entities need us not to question what it is they’re monetizing. They need us not to ask what our responsibilities are to each other. Therefore that is exactly what we have to start doing.