The widespread creation, dissemination, and adoption of fake news and other forms of disinformation have turned the 21st century’s Information Age into The Disinformation Age.
In November, I discussed how fake news is the world’s most powerful and socially destructive marketing technique. If you haven’t read that article, I recommend skimming it so we’re on the same page about what fake news is and why it’s a problem for society and marketers. In January, I also wrote about how fake news manipulates our brains to create false memories that change our perception of the truth. That article provides useful context but reading it is not necessary to understand this post.
In this article, the final part of my fake news series, I break down how the digital marketing ecosystem perpetuates fake news and what we can do about it.
Digital Marketing’s 1984 Business Model
Digital marketing exponentially propagates fake news and online disinformation. The bad-faith actors who peddle fake news are not stylish James Bond villains who manipulate, hack, or subvert social media platforms and search engines. These people can’t bend the world’s most influential companies to their will. Instead, disinformation traffickers use the same tools as any skilled marketer to achieve their unscrupulous goals.
By relying on digital ad revenues as its primary source of profit, Facebook created the mess society is in, Doug Guilbeault, an assistant professor at the University of California, Berkeley, argues in the Columbia Journal of International Affairs.
Guilbeault blames Facebook’s 2006 development of hyper-targeted ads for ushering in The Disinformation Age because, much like Google, Facebook designed its platform to influence users on behalf of marketers.
He’s right.
In digital marketing, personal data is the most valuable commodity in existence. We are the product for social media and search engine companies. These companies collect demographic and behavioral data to create micro-targeting algorithms. Advertisers and marketers then buy access to this data to push content and ads via the platforms’ predictive user-behavior models.
The more data these companies collect, the better they can tweak their platforms to improve advertisers’ conversions.
This surveillance-style business model exploits us as users and creates a strained relationship between progressive ethics and profit. For some of these companies, like Facebook, Twitter, and YouTube, the people who peddle disinformation, propaganda, memes, and junk news sites are a lucrative demographic.
For example, Facebook insiders tell BuzzFeed News that the company primarily focuses on growing revenue; facts and user safety come second. Facebook’s symbiotic relationships with disinformation peddlers, scammers, and hackers helped the company earn an estimated $80 billion in ad revenue in 2020, BuzzFeed reported.
The result of such unaccountable exploitation is a global economy built on dishonesty, manipulation, and maximum profit—the drastic social consequences be damned.
The Money Problem
In April 2018, Congress summoned Mark Zuckerberg to testify about how Facebook mishandled user information during the 2016 U.S. presidential election, which allowed fake news to feverishly spread on the platform like a wind-whipped wildfire.
Zuckerberg’s testimony tiptoed around why digital marketing contributes to online disinformation; he also created the false impression that digital marketing drives disinformation only when it is vulnerable to manipulation, Guilbeault states.
Zuckerberg’s reasons for painting Congress such an amorphous picture make perfect sense; after all, he’s beholden only to profits and a slew of affluent shareholders, not society.
But by relying so heavily on ad revenue, social media and organic search companies backed themselves into a corner where they face a paradox of incentives.
To achieve its current user base, Facebook initially had to make its user experience beneficial enough that people flocked to the service and promoted it through word of mouth. Granted, Facebook engineered its interface to make the platform addictive and to maximize opportunities to extract users’ behavioral data, according to Sean Parker, the ex-president of Facebook. In other words, Facebook didn’t have to try too hard at making a good product.
Likewise, after Google fine-tuned its user experience and gained popularity, it purposefully became an industry monopoly by devouring or otherwise eliminating the competition, according to emails unearthed in Congress’s antitrust investigation.
But I digress. As marketers, we’re stuck with these ethically shady companies until they’re usurped—if that’s even possible.
Ideally, Facebook and Google would treat their users equally and follow a utilitarian design philosophy. Unfortunately, as Guilbeault points out, digital marketing’s logic demands that certain demographics be treated as more valuable than others. Specifically, under an exploitative ad-driven business model, the only users who matter are advertisers and people with enough time and money to make online purchases.
These business-model choices are why Facebook’s timeline exists; they’re why Google is obsessed with continually thrusting organic results further down the SERP; they’re the “magic” behind Twitter’s trending algorithm, Guilbeault states.
Ultimately, because of how popular social media websites and search engines are designed, from the UX to the business models, it’s disadvantageous for these companies to stop the proliferation of fake news. Doing so would reduce data collection and eliminate ad revenue. The companies have little-to-no incentive to make such sacrifices.
So far, the only tangential exception to my argument is for political advertisements. And, while I’m glad an exception exists, it’s unfortunately only temporary and constrained.
Accountability vs. Profits: The Heavyweight Bout
To combat the spread of ad-based disinformation about results for the 2020 U.S. presidential election, Facebook and Google placed a temporary moratorium on political ads after polls closed on November 3rd. In mid-November, as Donald Trump and his allies began signaling their coup d’état attempt, both companies extended their bans.
As for Twitter, it stopped accepting political ads in late 2019. The company also began restricting how “issue-based” ads, which discuss a policy position without naming specific candidates, can be targeted. So, unlike during the 2016 and 2018 elections, Twitter’s ads weren’t much of a problem in 2020.
Despite Facebook putting a pause on political ads, the company still allows other fake, misleading, and dangerous ads to rampage through users’ feeds.
“This year [Facebook] took money for ads promoting extremist-led civil war in the U.S., fake coronavirus medication, anti-vaccine messages, and a page that preached the racist idea of a genocide against white people,” the BuzzFeed News report stated.
Facebook’s reluctance to hold advertisers accountable for fake information became even more obvious after the company followed Google’s lead and temporarily lifted its indefinite moratorium on political ads because of the Georgia runoff election. Google and Facebook chose to lift their bans on December 10th and 16th, respectively. And as I’m sure you anticipated, the amount of disinformation and false advertising skyrocketed.
Some people took advantage of the ban lift in clever ways, like the politicians who abused loopholes in Facebook’s policies for fundraisers and personal profit. This list includes Sen. Ted Cruz (R-TX), Sen. Mitch McConnell (R-KY), Sen. Mike Lee (R-UT), and Sen. Kirsten Gillibrand (D-NY), The Verge reported.
Other people wasted no time in unleashing a slew of ludicrous fake ads, such as an ad from the Republican Party in late December, which falsely claimed Rep. Nancy Pelosi (D-Calif.) plotted to help Vice President-elect Kamala Harris remove President-elect Joe Biden from office, The Washington Post reported.
And, of course, there were plenty of fake ads about the Georgia election. Here’s one example. The super PAC American Crossroads ran Facebook attack ads against Pastor Raphael Warnock, one of the Democratic candidates in the Georgia race. The smear campaign began when Sen. Kelly Loeffler (R-GA), Warnock’s opponent, used an out-of-context quote from Warnock about a sermon by Rev. Jeremiah Wright. PolitiFact, one of Facebook’s fact-checking partners, debunked Loeffler’s claim, giving it a “Mostly False” rating. Despite this, Facebook allowed the ad to keep running with the disproven information.
To its credit, Facebook took the ad down—five times.
Facebook allowing this misinformation-fueled advertisement to be repeatedly published is indefensible. Advertisers who peddle fake information should be suspended. Repeat offenders should receive permanent bans.
Alas, as Elizabeth Warren showed in 2019, Facebook doesn’t care about ad accuracy so long as the company is making money, which it does whenever an ad campaign is live.
In October 2019, Donald Trump’s re-election campaign ran a Facebook ad falsely claiming that “Joe Biden promised Ukraine $1 billion if they fired the prosecutor investigating his son’s company,” The New York Times reported. Naturally, Facebook accepted the false advertisement.
As I discussed in the second part of this series, simply seeing this false advertisement would trick a user’s brain. Subconsciously, the user becomes susceptible to the illusory truth effect, which makes people believe a false claim when it’s repeated often enough.
Pundits and politicians challenged Facebook’s brazen disregard for the truth. Facebook chose to keep the ads live and rake in money from Trump’s campaign.
To prove the absurdity of Facebook’s policy, Sen. Elizabeth Warren (D-Mass.) ran a Facebook ad with an intentionally false claim. Her ad stated that “Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election,” the New York Times report states.
Facebook responded to Warren’s display via tweet, saying, “@ewarren looks like broadcast stations across the country have aired this ad nearly 1,000 times, as required by law. FCC doesn’t want broadcast companies censoring candidates’ speech. We agree it’s better to let voters—not companies—decide.”
Facebook’s response ignored the blatant difference between its platform and broadcast media. Federal rules govern what broadcasters must do with candidates’ advertisements; however, no government regulations control the content and validity of online political ads.
In the United States, political parties, campaigns, and outside groups like super PACs are free to run any ads they want, provided the platform or advertising network lets them, according to a report about fake news in advertising by the New America Foundation.
“This gives companies like Google, Facebook, and Twitter tremendous power to set the rules by which political campaigns operate,” the New America Foundation report says.
And although social media platforms will occasionally regulate some political advertisers who peddle false information, like the Warnock attack ads, the companies take the opposite stance regarding false information spread directly by politicians.
The Public Figure Problem
Unfortunately, I can’t discuss fake news and fake advertising on social media without talking about the orange elephant in the room: Donald Trump.
Trump’s flagrant disregard for the truth during the past five years, coupled with his penchant for promoting unfounded conspiracy theories, put social media companies in a tough bind. Do they regulate the obviously false—and quite often dangerous—disinformation Trump and his allies share and set a moral precedent, or do they milk the money cow until the manure reeks too much?
We all know which option they chose.
Under the guise of advocating for public discourse, social media platforms let Trump and his allies incinerate any lingering threads of truth in American politics. While Trump spoon-fed radical disinformation to his allies, his opponents dove into the same unyielding wave of fake news to lambast social media platforms, politicians, and random users—quite often with more fake news, character attacks, or misinformation.
Once users shared the disinformation on Facebook and Twitter, the platforms’ content-shaping algorithms promoted it to the people in those users’ networks who were most likely to click, like, and otherwise engage with the information, the New America Foundation report states.
These algorithms rapidly spread the disinformation, allowing unregulated political leaders to reach and infect audiences well beyond their followers or targeted segments.
Add in disinformation-fueled attack ads and junk news sites that further promoted popular fake news and sowed social discontent, and the symbiotic relationship social media has with advertisers, marketers, content creators, and politicians began to smother social progress.
With the disastrous outcomes of widespread disinformation growing apparent and the threat of legislation mounting, Facebook and Twitter eventually began to slap disinformation warning labels on misleading content from select high-profile political leaders, Trump included. Sadly, as I discussed in the second part of this series, studies show fake news warning labels have little-to-no effect in changing people’s minds or stopping the misinformation tsunami from overwhelming society with chaos.
Even if these warning labels worked to stem new disinformation hotspots, years of fake news had already frayed our collective sense of truth, leaving our perception of facts as fluid as water.
Ultimately, the choice not to immediately regulate Trump’s lies and stochastic terrorism culminated in the insurrectionist attack on the U.S. Capitol Building on January 6, 2021, which left five people dead and scores wounded.
Trump’s video response to the insurrection, in which he condoned rather than condemned the violent assault, proved to be the last plop of manure for some media companies. Facebook, Twitter, Instagram, Twitch, YouTube, and Shopify indefinitely or permanently banned Trump from their platforms. And it may come as no surprise that online misinformation about the 2020 U.S. election fell by as much as 73% after Trump was cut off from these social media sites, according to findings by Zignal Labs.
The Business Model Solution
Society’s concept of truth is in shambles because social media companies refused to self-regulate fake news and false advertising until it was too late.
If we’re going to see a comforting sunrise sear away this dark hour in humanity’s history, we can’t let social media companies self-regulate. These companies are arguably some of the most powerful organizations globally, and unaccountable organizations shouldn’t have the ability to brainwash society for profit without severe consequences.
We need our political leaders to create meaningful policies that minimize the social damage from social media and organic search engine abuse. Politicians must do the dirty work because these platforms won’t design ethical and effective anti-fake news policies quickly, since any changes that minimize the impact of fake news also directly reduce revenues, an analysis by the University of Pennsylvania argues.
And unlike the FTC’s online advertising laws, which solely pursue the advertiser for liability and damages, these new laws must also hold social media platforms accountable for the advertisements they allow.
I believe that it’s unrealistic for social media companies to effectively monitor all fake content from every user. But we should force these companies to ensure that information shared by advertisers and high-profile, influential users and politicians is accurate and honest.
Here’s my proposal for the United States.
First, Congress or state governments create a small tax on major social media and search engine companies to fund full-time, non-partisan fact-checking NGOs staffed with reputable journalists, researchers, and scholars. These groups will fact-check the content that users or algorithms flag as false.
Afterward, if at least two of these third-party fact-checkers prove the flagged content false, take the content down and penalize the user. Then, algorithmically remove any post that shared the high-profile user’s false content and replace it with a call-out to review the fact check.
High-profile accounts would also gain the option to label posts as opinions, which could circumvent some of the fact-checking rules for clearly opinion-based content. However, fact-based content, such as certified election results, would still be held to a higher standard.
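For the technically inclined, here’s a minimal sketch of that decision rule in Python. Every name, threshold, and penalty action below is my own illustration of the proposal, not an existing platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    opinion_labeled: bool = False      # author used the proposed opinion label
    factual_claim: bool = True         # e.g., cites certified election results
    false_verdicts: set = field(default_factory=set)  # NGOs that rated it false

def moderate(post: Post, reshares: list) -> str:
    """Apply the proposed rule: two independent fact-checks trigger a takedown."""
    # Opinion-labeled content skips fact-checking unless it asserts facts.
    if post.opinion_labeled and not post.factual_claim:
        return "exempt"
    # Require at least two independent fact-checking NGOs to agree it's false.
    if len(post.false_verdicts) < 2:
        return "kept"
    # Take the original down, penalize the author, and swap every reshare
    # for a call-out pointing readers to the fact check.
    for share in reshares:
        print(f"replacing {share.post_id} with a link to the fact check")
    return f"removed; {post.author} penalized"

original = Post("p1", "@bigaccount", false_verdicts={"NGO-A", "NGO-B"})
shares = [Post("p2", "@follower1"), Post("p3", "@follower2")]
print(moderate(original, shares))  # -> removed; @bigaccount penalized
```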
And if social media platforms don’t follow the rules to protect society, fine the companies—heavily—and then donate the hefty fines to public education programs that promote critical thinking skills. Such programs can help the next generation recognize and combat disinformation.
As a bonus, a policy that forces social media platforms to police high-profile accounts may also help confront a pervasive news media problem that Vox calls “flooding the zone with shit.” The zone floods when the news media latches onto politicians or influencers who promote so many competing narratives, much of which involves misinformation, that even well-educated and well-intentioned people don’t know what to believe, Vox argues.
I’m sure some of you think this solution tramples freedom of speech. But remember, social media platforms are private entities, and users play by their rules.
Freedom of speech does not exist on social media.
Users don’t have any rights outside the Terms of Service. At any time, platforms can remove your content or ban your account, and you have little-to-no recourse.
If free speech were federally guaranteed on social media (say, if the government seized Facebook), I’d agree that my idea would violate freedom of speech. But until that happens, my policy idea isn’t too different from existing FCC regulations.
What About Search Engines?
Although search engines have the same ad-driven business model, combating organic fake news on search engines is a much trickier situation. Widespread legislative or algorithmic changes could ripple far beyond the users who seek out and engage with junk news sites. And while Google or others could deindex junk news sites or use updates like E-A-T and Core Web Vitals to make ranking harder, brand-name purveyors would operate unabated.
The scope and scale of the problem is far greater on search engines than on social media, too.
Google could disregard search neutrality, partner with fact-checkers, and then provide hand-picked rich snippets or answer boxes for topics deep in fake news and junk news territory, such as conspiracy theories, historical events, and public political statements.
However, I’m not confident Google can ethically manage a challenge of this magnitude. As it stands, Google won’t (or can’t) even remove racial bias and stereotyping from first-page results, Google Image Search, or Google Maps.
Ultimately, what happens to SERP results and fake news will come down to the laws in a user’s country and how search engines shape their algorithms around those laws. For example, in France and Germany, online platforms must moderate antisemitic hate speech, but no such rules exist in the United States.
However, that disparity is slowly shrinking. Some countries are starting to take legislative steps to combat fake news online, particularly on social media.
Fighting Fake News with Legislation
The arguments about restricting or fighting fake news and misinformation are diverse, with legitimate pros and cons on each side.
In 2018, the European Union published its Final report of the High Level Expert Group on Fake News and Online Disinformation. The report evaluates disinformation as a phenomenon in Europe and draws its conclusions from an inclusive, collaborative group of subject-matter experts.
The report encourages governments not to reach for simplistic solutions and to avoid any form of censorship, public or private.
Instead, the group argues that governments should “provide short-term responses to the most pressing problems, longer-term responses to increase societal resilience to disinformation, and a framework for ensuring that the effectiveness of these responses is continuously evaluated, while new evidence-based responses are developed.”
As a long-term play, the report’s suggestions seem like an excellent way to tackle disinformation from the ground up and to create a smarter, more resilient society.
Here’s the high-level blueprint the EU report recommends.
- “enhance transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
- promote media and information literacy to counter disinformation and help users navigate the digital media environment;
- develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
- safeguard the diversity and sustainability of the European news media ecosystem, and
- promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.”
Sadly, other governments and governing entities use the fake news rallying cry to smother dissent from dissidents, social media users, and journalists—effects that can trickle down to marketers in those countries.
Governments in a few countries, such as Bangladesh, Egypt, Indonesia, and Rwanda, have already used anti-fake news statutes to imprison journalists and social media users who criticize the government or local leaders, according to an analysis by Poynter.
Other countries, like Germany, Italy, Singapore, Sweden, and the United States, are pursuing various laws and bills that target fake news publishers, distributors, and social media platforms.
Singapore, for example, created a committee to focus exclusively on online disinformation. In 2019, the country’s parliament passed The Protection from Online Falsehoods and Manipulation Act, which is among the most comprehensive anti-misinformation laws in the world.
In the United States, most of the legislative focus has been on preventing unethical marketers and foreign entities from spreading fake news via ads.
In 2017, Congress proposed the Honest Ads Act, which expands disclosure rules to include any online ads that mention a candidate. This legislation also provides clearer language that bans foreign spending on all political advertisements.
The Honest Ads Act attempts to hold advertisers publicly accountable by requiring online ad vendors, including social media platforms like Facebook, to maintain public databases of all online political advertisements, regardless of whether they mention specific candidates, according to an analysis by The Brennan Center for Justice.
“Such a requirement would increase transparency by providing both journalists and the general public visibility into critical information about online ads, such as its target audience, timing, and payment information,” the Brennan Center analysis states. “Similar rules are already in effect for both television and radio companies, which maintain public records of political ad purchases.”
However, the Honest Ads Act contains crucial ambiguities that may undermine its goals and benefits, Guilbeault, the UC Berkeley professor, points out.
The biggest issue with the bill is that it doesn’t account for how ad-driven fake news spreads most on social media platforms. First, the bill only applies to social media companies with more than 50 million U.S. visitors every month. Second, the bill doesn’t account for the common disinformation tactic of sending users from advertisements to junk news sites, blogs, or forums, where they are exposed to more extreme or misleading content, Guilbeault says.
The newest iteration of the bill was reintroduced to Congress in May 2019. Neither the Honest Ads Act nor any other meaningful legislation has been passed into federal law.
In 2018, California’s state government took a crack at anti-fake news legislation by bolstering media literacy in public schools. The law requires the state’s Department of Education to list instructional materials and resources that teach students how to locate and evaluate trustworthy media.
Whatever anti-fake news legislation eventually passes in each country will undoubtedly affect the algorithms that social media and search engines use.
How Algorithms Stop Fake News
The unfathomable number of content creators, SEOs, advertisers, users, and web developers who publish content daily means that we can’t rely solely on human moderators to protect us from optimized disinformation. Algorithms will inevitably play a crucial role in moderating and controlling the harmful effects of online content.
After legislation comes into play and platforms have no choice but to act, the New America Foundation report argues that platforms will use two types of algorithms to moderate false information and the users who spread it.
- Content-shaping algorithms, which choose the content and paid advertisements each user sees online based on the user’s interests, platform settings, and behavioral data.
- Content moderation algorithms, which detect content that breaks the company’s rules and remove it from the platform.
Companies will need to combine both algorithmic systems to spot fake news, control its spread, and then penalize or restrict the original source. Artificial intelligence and machine learning are how this coordination will come to fruition.
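As a rough illustration of how the two systems might be combined, consider this short Python sketch. The scoring functions, thresholds, and toy posts are invented placeholders; real platforms would use far more elaborate machine-learned models.

```python
def rank_feed(posts, user_profile, moderation_score, engagement_score,
              block_threshold=0.9, downrank_threshold=0.6):
    """Blend content moderation with content shaping in one ranking pass."""
    feed = []
    for post in posts:
        risk = moderation_score(post)                 # content moderation model
        if risk >= block_threshold:
            continue                                  # remove rule-breaking content
        score = engagement_score(post, user_profile)  # content-shaping model
        if risk >= downrank_threshold:
            score *= 0.1                              # throttle borderline content
        feed.append((score, post))
    feed.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in feed]

# Toy usage with hard-coded scores standing in for trained models.
risk = {"miracle cure!": 0.95, "unverified rumor": 0.7, "local news": 0.05}
clicks = {"miracle cure!": 0.9, "unverified rumor": 0.8, "local news": 0.4}
print(rank_feed(risk.keys(), {}, risk.get, lambda p, u: clicks[p]))
# -> ['local news', 'unverified rumor']  (the rumor survives but is buried)
```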
Controlling Fake News with AI
The first step in using AI to combat online disinformation is to build algorithms that place ethics at the forefront of the detection and mitigation systems, Samuel Woolley, an assistant professor at the University of Texas at Austin, argues in an interview with The Economist.
Woolley also argues that these algorithms must include fail-safes, monitors, and oversight systems to make sure tools built to help aren’t co-opted to hurt users, safeguards that would directly affect deplorable marketers who run disinformation ad campaigns or operate junk news websites.
A system such as this would theoretically increase organic content diversity and socioeconomic representation, prevent false ads from getting repeatedly published, and reduce how often users are exposed to optimized disinformation. An ethical algorithm should also ensure that any automatic bans or restrictions on content creators are justified, and limit the number of false positives that need manual review.
However, traditional AI and machine learning development methods make it immensely difficult to create an ethical and effective anti-fake news algorithm.
The common way to train an autonomous algorithm is to provide it with examples that “teach” it how to recognize specific information types. Most of the existing anti-disinformation algorithm models analyze user engagement, user sentiment, biased language, and other psycho-linguistic features, according to a VentureBeat report.
These examples are known as labeled instances. Acquiring enough clear-cut labeled instances in the early phases of development is time-consuming but crucial if the algorithm is to identify fake content by itself.
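Here’s what that training step can look like in miniature, using scikit-learn. The four toy headlines are invented stand-ins for a real labeled corpus, and TF-IDF features are only a crude proxy for the engagement, sentiment, and psycho-linguistic signals these models actually analyze.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled instances: (text, label) pairs, where 1 = fake and 0 = real.
labeled_instances = [
    ("Miracle cure erases virus overnight, doctors stunned", 1),
    ("SHOCKING: senator secretly controls election servers", 1),
    ("City council approves budget for road repairs", 0),
    ("Researchers publish peer-reviewed study on vaccine trial", 0),
]
texts, labels = zip(*labeled_instances)

# TF-IDF turns each headline into word-frequency features; the classifier
# then "learns" which features co-occur with the fake label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Stunned doctors reveal overnight miracle cure"]))
# e.g. [1] -> flagged as fake, based on the toy training data above
```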
Likewise, the people who provide these labeled instances are also important. Existing AI has an earned reputation for racism, bias, and stereotyping.
For example, content moderation algorithms trained to identify and remove hate speech on social media are more likely to flag content created by African Americans, including posts using slang to discuss contentious events and personal experiences related to racism in America, the New America Foundation report points out.
To achieve Woolley’s first step and ensure that content recommendations and moderation are ethical, any anti-fake news algorithms will need to be developed by a diverse, inclusive community of programmers from various socioeconomic backgrounds. Given the tech industry’s diversity issues, this step may take a while.
Surprisingly, crowdsourcing may be a way to combat algorithm bias while providing labeled instances.
In the paper “Crowdsourcing Judgments of News Source Quality,” the authors discovered that crowdsourced judgments about news source trustworthiness are surprisingly accurate and correlate strongly with the ratings of professional fact-checkers. The bonus of crowdsourcing labeled instances is that programmers can ensure socioeconomic diversity among the labelers.
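A minimal sketch of that comparison, assuming invented ratings and domain names: aggregate the crowd’s per-source trust scores, then measure their rank correlation with professional fact-checker scores.

```python
from statistics import mean
from scipy.stats import spearmanr

# Each source maps to individual 1-5 trust ratings from laypeople.
crowd_ratings = {
    "wire-service.example": [5, 4, 5, 4, 5],
    "local-paper.example":  [4, 4, 3, 4, 4],
    "junk-news.example":    [2, 1, 2, 1, 1],
    "conspiracy.example":   [1, 1, 2, 1, 1],
}
# The same sources rated by professional fact-checkers.
fact_checker_ratings = {
    "wire-service.example": 4.8, "local-paper.example": 4.1,
    "junk-news.example": 1.4, "conspiracy.example": 1.1,
}

sources = list(crowd_ratings)
crowd_means = [mean(crowd_ratings[s]) for s in sources]
expert = [fact_checker_ratings[s] for s in sources]

# Spearman's rho measures how well the crowd's ranking matches the experts'.
rho, p_value = spearmanr(crowd_means, expert)
print(f"crowd vs. fact-checkers: rho={rho:.2f} (p={p_value:.3f})")
```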
Facebook, Google, Microsoft, and MIT are among the organizations actively working on algorithms to fight online disinformation.
Whose Truth is Real?
Congratulations, you made it to the end of my fake news trilogy!
As I’ve mentioned in this series, truth is a relative concept in a post-truth world. Fake news and optimized disinformation permeate every corner of our industry, and unfortunately, they’re here to stay until political leaders and the private sector decide to coordinate on a long-term solution.
Although marketers alone can’t halt the fake news plague, we can approach our work, methods, and goals with enlightened mindsets to ensure we’re not further destabilizing the social order.
First, pay attention to how you develop marketing strategies, tools, and business models. During development, ask yourself whether a nefarious person could misuse these tactics to manipulate or scam others. If such an outcome is possible, implement safeguards or tweak the strategy to thwart or slow that abuse.
Next, remember the subconscious effects of the illusory truth effect and how even one instance of exposure to disinformation can warp somebody’s opinion and beliefs. To help, make sure you respond to user reviews and PR crises with authenticity and facts. Support any product claim or brand value proposition with public evidence, especially if it’s something users or competitors critique.
Last, when you see systems, platforms, or strategies that are used for disinformation but can be improved, speak out. Share those thoughts with people who can make a difference and create change. Afterward, bring the situation into the public’s eye so the marketing community can hold those responsible accountable.
Escaping the post-truth zeitgeist will take time. Working together is the only way we’ll pass through the storm.