Fake News Is a Marketing Feature, Not a Hack: Part 1

Travis McKnight

Truth is the arbiter of reality: a sacred, unbiased, and unwavering lens through which we view and understand the universe.

At least, truth used to hold this esteemed responsibility.

In the digital world, unearthing “truth” is no longer simple or reliable. Search results, social media, legitimate and illegitimate news organizations, and paid advertising overflow with misinformation and disinformation.

This phenomenon is summed up in two words: fake news.

Fake news is the most powerful and socially destructive marketing technique of the 21st century. The fake news pandemic is global and unyielding, and we are all susceptible to its infection. Our widespread vulnerability is exactly why it’s crucial for marketers to understand why and how disinformation is created, spread, and—most importantly—combated.

This article is the first in a three-part series about the relationship between fake news and marketing. This post lays the groundwork needed to ensure we’re all on the same page about what fake news is, why it’s a problem, and how it relates to marketing. The second article talks about how disinformation affects our brains and manipulates our behaviors. I wrap up the series by discussing how the web’s surveillance-based business model perpetuates fake news and what can be done about it.

What is Fake News?

In the simplest terms, fake news is optimized disinformation.

Optimized disinformation has a veneer of legitimacy and commonly rewrites “the truth” using advertising, fabrication, manipulation, political satire, and propaganda.

Usually, this tactic is employed to manipulate the beliefs, motivations, and actions of like-minded people. The strategy is also used to sow confusion around polarizing topics, stymie constructive public discourse, and erode trust in traditional paragons of truth, like scientists, journalists, and healthcare officials. In some egregious cases, optimized disinformation is used exclusively as a marketing and money-making tool, such as the InfoWars Sandy Hook conspiracy.

Although disinformation morphs into many nebulous disguises with equally shadowy goals, few types of disinformation are more sinister—or more effective—than junk news, which exploded in popularity during the 2016 U.S. presidential election.

Because junk news is so popular and spreads so effectively on social media, it is the type of fake news I reference most in this series.

In the research paper Disinformation Optimised: Gaming Search Engine Algorithms to Amplify Junk News, Samantha Bradshaw, a doctoral researcher at the Oxford Internet Institute, defines a junk news source as a website that misleads people by using at least three of the following five deceptions:

  1. Professionalism. Junk news sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners.
  2. Style. Junk news relies on emotionally driven language, ad hominem attacks, mobilizing memes, and misleading headlines.
  3. Credibility. Junk news relies on false information or conspiracy theories, and sources do not post corrections.
  4. Bias. Junk news sources are highly biased, ideologically skewed, and publish opinion pieces as news.
  5. Counterfeit. Junk news websites mimic established news websites and reporting techniques, including fonts, branding, and content strategies.

Junk news websites that the study evaluated include InfoWars, Breitbart, Zero Hedge, CNS News, Raw Story, The Daily Caller, and The Federalist. There are also more than 450 hyper-partisan websites that often get labeled as fake news, but these groups usually peddle inflammatory misinformation rather than blatantly optimized disinformation.

Granted, at times the distinction is fairly thin. And unfortunately, Google doesn’t do a great job of distinguishing between legitimate and junk websites.

How Does Disinformation Relate to Marketing?

Disinformation campaigns maliciously leverage every nuance of the surveillance-based business model that search engines and social networks are built around.

I’ll dive into the nuts and bolts of this topic in the third article of this series, but here are a couple of examples until then.

First, let’s look at junk news and advertising.

In 2019, The Global Disinformation Index, a UK nonprofit that rates websites’ trustworthiness, analyzed programmatic advertising among 1,700 junk news websites. The analysis shows that 70 percent of these websites ran programmatic advertising and collectively earned $235 million from those ads.

Several of the household-name brands mentioned in the GDI report that inadvertently bankrolled junk news sites include Audi, Sprint, Honda, Office Max, American Airlines, Casper, and Oxford University.

Now, let’s take a gander at organic search.

In 2016, Google’s search algorithms failed miserably at answering an extremely serious question: did the Holocaust happen? At the time, Google’s top results said “no.”

As The Guardian reported, the top result was a link to the article “Top 10 Reasons why the Holocaust Didn’t Happen,” published by stormfront.org, a neo-Nazi site. The algorithmic failure didn’t stop there. The third result was the article “The Holocaust Hoax; IT NEVER HAPPENED.” The fifth result was “50 Reasons Why the Holocaust Didn’t Happen.” The seventh position was a YouTube video, “Did the Holocaust Really Happen?” And the ninth result was “Holocaust Against Jews is a Total Lie – Proof.”

After this failure sparked global outrage, Google tweaked its algorithm to change the search results and prevent similarly optimized disinformation from ranking for the term.

The Oxford Internet Institute report shows the algorithm changes had a noticeable effect on four junk news websites with significant organic keyword growth: InfoWars, Zero Hedge, Daily Caller, and Breitbart. The report states that since August 2017, all four top-performing domains have appeared less frequently in top positions for the non-branded Google searches they were optimized for.

Despite the progress, Google’s algorithms still have a long way to go.

For example, take the phrase “climate change hoax.” Ahrefs shows the phrase gets 1,000 monthly searches. As of Oct. 26, 2020, three of the top 10 results are disinformation:

  • 31,000 Scientists Say ‘No Convincing Evidence’
  • Climate Change 7: How Global Warming is Both a Hoax and Legitimate Area of Study
  • Climate Change is a Hoax

The Consequences of Fake News

Before the 2016 election, I was naive about the insidious reach and power that junk news websites and fake news have across the world. Here are three significant consequences of fake news that we’ve seen unfold in the past few years.

Conspiracies

Pizzagate, QAnon, birtherism, climate change denial, anti-vaxxers, Holocaust denial, COVID-19 being fake … the list of new, widely supported conspiracies is nearly endless. For years, algorithmic failures and a lack of gatekeeping at tech giants like Google, Facebook, and Twitter perpetuated these shared delusions and allowed conspiracies to flourish.

Some companies are taking action against conspiracy-related fake news, such as Twitter and Facebook shutting down QAnon accounts in July of 2020, but these reactionary measures are often too late.

Unfortunately, the fake news marketing tactics conspiracy pushers use have already proved successful and influenced their target audiences’ beliefs.

As an example, let’s look at how fake news marketing amplified the absurd QAnon conspiracy, which is associated with a string of violent incidents and which the FBI labeled a domestic terrorism threat in 2019.

The conspiracy, in case you’ve remained blissfully unaware, spawned on 4chan in late 2017 from the notion that Donald Trump is secretly fighting a “deep state” cabal of cannibalistic child sex-traffickers and satanic cultists. Since its inception and rise in popularity, the conspiracy’s ideology has become more malleable, adopting other popular delusions, such as the lie that COVID-19 does not exist.

Ironically, the COVID-19 pandemic spurred exponential growth in QAnon. In March of 2020, membership of the largest public QAnon Facebook groups grew by 700 percent, the BBC reported in July. The report ties the popularity growth to increased internet use and more exposure to junk news and social media disinformation during quarantine.

October reports by CBS and Wired highlight how data collection techniques, marketing tools, and content recommendation algorithms from Facebook, Twitter, YouTube, and Google create a self-fulfilling prophecy and “rabbit hole” for users who search for QAnon content or have demographic markers that associate them with users who participate in these conspiracy groups. These systems start pushing advertisements, videos, hashtags, trending content, sponsored content from junk news sites, people to follow, and online communities that provide users with more and more conspiracy content and confirmation.
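
To make that rabbit-hole mechanic concrete, here is a toy simulation of an engagement-driven recommendation loop. It is a minimal sketch, not any platform’s actual code: the content catalog, the “fringe scores,” and the assumption that slightly edgier content earns more clicks are all invented for illustration.

```python
import random

# (label, fringe score from 0 = mainstream to 1 = conspiracy)
CATALOG = [
    ("mainstream news", 0.0),
    ("political opinion", 0.3),
    ("hyper-partisan blog", 0.6),
    ("junk news article", 0.8),
    ("conspiracy video", 1.0),
]

def recommend(interest, catalog, k=2):
    """Rank items by predicted engagement. In this toy model, content
    slightly *more* fringe than the user's current interest scores
    highest, because outrage and novelty drive clicks."""
    def predicted_engagement(item):
        _, fringe = item
        return -abs(fringe - (interest + 0.15))  # sweet spot: a bit edgier
    return sorted(catalog, key=predicted_engagement, reverse=True)[:k]

random.seed(1)
interest = 0.1  # the user starts out mostly mainstream
for step in range(8):
    picks = recommend(interest, CATALOG)
    clicked = random.choice(picks)                 # the user clicks one pick
    interest = 0.7 * interest + 0.3 * clicked[1]   # the profile drifts toward clicks
    print(f"step {step}: clicked {clicked[0]!r}, interest is now {interest:.2f}")
```

Run it and the interest score ratchets upward: each click nudges the profile toward fringier content, which makes fringier content score higher on the next pass. No single recommendation looks dramatic, which is exactly why the drift is hard to notice from inside it.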

The conspiracists transformed from having a negligible amount of political power to earning enormous political capital in an extremely short amount of time.

As of September 2020, a Daily Kos/Civiqs poll shows that 86 percent of Americans have at least heard of the QAnon conspiracy, compared to 65 percent in 2019. Here’s the poll’s breakdown of engagement from the two major political parties:

  • 33 percent of Republicans believe that the QAnon theory is mostly true
  • 23 percent of Republicans say that some parts of the QAnon conspiracy are true
  • 13 percent of Republicans think that it is not true at all
  • 72 percent of Democrats say the QAnon conspiracy theory is not true at all
  • 14 percent of Americans have not heard of QAnon

These conspiracy zealots are now actively shaping the landscape of U.S. politics. There are currently 24 congressional candidates who publicly support and advocate for QAnon conspiracies and are on the ballot for House races in the 2020 election.

Election Interference

As the 2016 and 2020 presidential elections show us, election interference and widespread voter misinformation are the bread-and-butter outcomes of fake news campaigns. Enormous amounts of content have been written about this subject, so I’m not going to rehash that information or harp on it. Instead, I’ll quickly highlight how voters’ behaviors make these tactics easy to execute.

July and September reports by The Pew Research Center show that 26 percent of U.S. adults get their news from YouTube, and 18 percent of U.S. adults say social media is their primary source for political and election news.

Of the YouTube news crowd, 23 percent get their news from independent YouTube channels, and 14 percent of these channels publish videos that are primarily dedicated to conspiracy theories.

For the traditional social media crowd, only 17 percent can correctly answer at least eight of nine questions about foundational political knowledge, such as which party supports certain policy positions.

These statistics soften the blow of the following two highlights from the Pew reports:

  • Social media news consumers are 68 percent more likely than people who consume traditional media to report seeing made-up news related to the coronavirus pandemic.
  • Only 37 percent of social media news consumers are “very concerned” about the impact of misinformation on the 2020 election.

During the 2016 election, Twitter users shared as much “junk news” as professionally produced news about politics, the Oxford Internet Institute reports. When fake news gets into the hands of its target demographics, users do an excellent job of spreading the disinformation without being any the wiser.

And when fake news spreads, it does so extremely fast.

“True news, for example, took about six times as long as false news to reach 1,500 people on average—and false political news traveled faster than any other kind of false news, reaching 20,000 people almost three times as fast as other categories reached 10,000,” Vice reported in 2019.

When this false news is politically inflammatory and has the KPI of manipulating and intimidating voters, the outcome can sway elections and invalidate votes.

As NPR reported in October, “One false rumor circulated in Texas that bar codes on mail-in ballot envelopes can reveal personal information, including whether the voter is a Republican or a Democrat.” After receiving several ballots with blacked-out bar codes, Tarrant County Elections Administrator Heider Garcia took to Twitter and posted a video warning voters that this could lead to their ballots being rejected.

Although the people who succumb to these disinformation campaigns may be a minority of voters, sometimes that’s all it takes. For example, in the 2000 presidential race, 0.01 percent of votes swung the election. In the 2016 election, 0.72 percent of Pennsylvania voters decided who won the state.

Radicalism

Extremists have a long history of using disinformation and propaganda to gather recruits and sway minds. But, as Cambridge Analytica’s 2016 disinformation campaign showed, the sheer reach and pinpoint user targeting of paid advertising and social media have spurred growth in these ideologies.

In the European Union’s 2019 case study, Understanding Citizens’ Vulnerabilities to Disinformation and Data-Driven Propaganda, researchers analyzed how social networks became platforms where disinformation spreads exponentially fast. They determined that fake news, often political propaganda, would surge in popularity and then be uncritically picked up and redistributed to an even larger audience by traditional media outlets and junk news websites.

The EU report’s findings demonstrate that today’s society is increasingly vulnerable to disinformation operations. The vulnerability stems from “information overload and distorted public perceptions produced by online platforms’ algorithms built for viral advertising and user engagement.”

When content creators and bad-faith actors push polarizing content through these disinformation channels, such as the George Soros conspiracy theories or the COVID-19 infodemic, they spread radical ideologies and behaviors to larger user groups, some of whom eventually take action. Three recent examples include:

  1. The man who mailed pipe bombs to 16 prominent Democrats
  2. The man who tried to assassinate congressional Republicans while they played baseball
  3. The 14 members of an anti-government “militia” group who were arrested by the FBI for plotting to kidnap and execute the governors of Michigan and Virginia

A 2019 report by PBS shows how paid advertisements, content amplification options, and more “have rejiggered the landscape of content visibility on social media websites” to inspire more radical behavior.

But how does somebody go from reading a false blog article or seeing a politically charged meme to becoming a domestic terrorist?

It’s all about how these stories are made and marketed.

How Fake News is Made

The marketing strategy behind fake news—despite being entirely unethical, dangerous, and socially destructive—is brilliantly executed.

As research from the University of Pennsylvania shows, modern fake news content is carefully designed so its target users can’t detect that the information is false. Understanding this tactic is crucial to seeing how fake news is marketed.

Analysis by the University of Pennsylvania researchers, alongside scores of others, shows that we’re too smart to be deceived by fake news designed for someone else, but most of us can be duped and manipulated by the fake news designed to deceive us as individuals.

There are a lot of psychological and physiological factors in play here; I dive into those issues in the next article. But from a marketing perspective, fake news creators essentially perform user research to build personas and craft their message individually for each group of readers. This personal message taps into the users’ passions and desires to elicit the strongest possible emotional response. In other words, this tactic is robust audience targeting at its very worst.
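
To ground that idea, here is a minimal, entirely hypothetical sketch of what persona-keyed message targeting looks like in code. The mechanics are the same segmentation any marketing stack uses; the personas, grievances, and messages below are invented for illustration and imply no real campaign data.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    grievances: set    # what this segment is angry about
    beliefs: set       # what they already accept as true

# Invented segments, standing in for the "user research" step.
PERSONAS = [
    Persona("rural_voter", {"factory closures"}, {"elites are out of touch"}),
    Persona("new_parent", {"vaccine schedules"}, {"big companies hide data"}),
]

# One message per grievance, phrased in terms the segment already
# believes -- engineered for emotional resonance, not accuracy.
MESSAGES = {
    "factory closures": "THEY shipped your jobs overseas. Share if you're angry.",
    "vaccine schedules": "What your doctor won't tell you about the new schedule...",
}

def target(personas, messages):
    """Pair each persona with the message built to hit its grievance."""
    for persona in personas:
        for grievance in persona.grievances:
            if grievance in messages:
                yield persona.name, messages[grievance]

for segment, message in target(PERSONAS, MESSAGES):
    print(f"{segment}: {message}")
```

The point of the sketch is how little machinery this takes: swap in behavioral data from an ad platform and a content calendar, and the same loop powers either an honest campaign or a disinformation one.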

In many ways, modern fake news relies on harvesting and analyzing personal information, such as user behavior data meant for advertisers or geo-location history data from cellular service networks.

Let’s take a look at how these pinpoint-accurate personas are used to create tailored, targeted, and effective disinformation campaigns on an industrial scale.

The Fake News Formula

“The first part of fake news is crafting a lie, backed up with a set of supporting arguments selected because they will convince the intended readers, not because they are true,” the University of Pennsylvania’s Wharton report states.

After the lie is chosen, a big balancing act comes into play. Every fake news campaign creates a set of different lies, and each set of lies is backed up by a different set of supporting arguments. Both the lies and the supporting arguments must be designed to convince a particular group of readers. If you get the balance wrong, the wrong user group gets the disinformation and the plan is foiled.

“A fake news story sent to the wrong readers will produce a backlash, since many readers will be able to detect the false statements and deliberate misinformation in fake news designed for others,” the Wharton report argues. “They will sense the intended manipulation, and react negatively to it.”

If I were to create fake news, here’s how I could break down these strategies into an actionable process, according to the Wharton report.

  1. I decide what I want you to believe.
  2. I learn what your current grievances are:
    What makes you furious?
    What do you think has been taken from you?
    What do you want to regain or retain?
  3. I learn what you do and don’t already know, and what you do and don’t already believe.
  4. Now, I construct an argument that explains how you were unjustly deprived of what you want. The people who took what you want away? They’re the people I want you to oppose.
    My lie is carefully constructed with supporting evidence, which relies on the data that I have selected because I know it will elicit a strong emotional response.
    The claims I use as evidence don’t need to be true. All I need is that you don’t know that I’m lying when you encounter the evidence.
  5. Now you’re riled up and in an emotional state. When we’re emotionally charged, we share more on social media. And unless you fact-check my lies, which is unlikely because I know you already ideologically align with my argument, you’ve now spread my lies to other like-minded people.
  6. The cycle continues until it reaches somebody who notices or fact-checks my lie and the “evidence” supporting it.

That’s the entire process. In many ways, this tactic is Mad Men marketing on steroids. Plus, as you probably noticed, this step-by-step technique mimics the honest and authentic marketing formulas we rely on to serve users.

And the terrifying part of this entire strategy? It’s much, much easier to design an effective lie for each target audience and then amplify it on a junk news site and social media using SEO, influencers, and paid advertising than it is to design a compelling, easily understood counterargument that convinces the recently duped audience that they were fooled.

What Can We Do?

As marketers, we helped get the world into this mess with our ingenuity and well-honed strategies. After all, fake news creators simply reverse-engineered our marketing techniques for their nefarious goals. Fortunately, this means we should be able to devise a solution to the problem.

Here’s my proposal: We need to sell “truth.” Truth must become a hot, desirable commodity. Make people yearn to know if information is false. Remarket fact-checking as the next lifestyle trend.

I know, this idea sounds ridiculous! But it’s possible. The first step is to learn the science behind why we are susceptible to lies and fake news, and then design a strategy to hijack those vulnerabilities for good.

I dive into the science behind why fake news is so compelling in the next article. Check that one out for all of the nerdy goodness.

Travis McKnight

Content Strategy Architect

Prior to migrating to digital marketing, Travis spent many years in the world of journalism, and his byline includes The Guardian, New York Magazine, Slate Magazine, and more. As a Content Strategy Architect at Portent, Travis gets to apply his passion for authentic storytelling to help clients create meaningful content that consistently delivers a refined experience for users.
