In announcing the news that Oxford Dictionaries had chosen “post-truth” as 2016's word of the year, the Chicago Tribune declared that “Truth is dead. Facts are passé.” 1 Politicians have shoveled this mantra in our direction for centuries, but during this past presidential election, they really rubbed our collective faces in it. To be fair, the word “post” isn't to be taken to mean “after,” as in its normal sense, but rather “irrelevant.” Careful observers of the recent US political campaigns came to appreciate this difference. Candidates spewed streams of rhetorical effluent that didn't even pretend to pass the most perfunctory fact-checking smell test. As the Tribune noted, far too many voters either didn't notice or didn't care.
That said, recognizing an unwelcome phenomenon isn't the same as legitimizing it, and now Oxford Dictionaries has gone too far toward the latter. They say “post-truth”—as 2016's word of the year—captures the “ethos, mood or preoccupations” of the year and has “lasting potential as a word of cultural significance.” 1 I emphatically disagree. I don't know what “post-truth” did capture, but it didn't capture that. We need a phrase for the 2016 mood that's a better fit. I propose the term “gaudy facts,” for it emphasizes the garish and tawdry nature of the recent political dialog. Further, “gaudy facts” has the advantage of avoiding the word truth altogether, since there's precious little of that in political discourse anyway. I think our new term best captures the ethos and mood of today's political delusionists. There's no ground truth in sight, all claims are imaginary and unsupported without even a pretense of reality, and distortion is the new reality. This seems to fit our present experience well.
The only tangible remnant of reality that isn't subsumed under our new term is the speakers' underlying narcissism, but at least we're closer than we were with “post truth.” We need to forever banish the association of the word “truth” with “politics”—these two terms just don't play well with one another.
There's been a lot of discussion lately about the ubiquity of fake news. Craig Silverman of BuzzFeed reported that fake election news outperformed real news on Facebook in the final months of the 2016 presidential election. 2 He wrote that phony news “engagements”—which he defines as the total number of Facebook shares, reactions, and comments—with readers were 20 percent higher than mainstream news by election day. According to Silverman, the five leading fake stories on Facebook in the three months before the election were:
- “Pope Francis Shakes World, Endorses Donald Trump” (960,000 engagements; Ending the Fed [Facebook])
- “WikiLeaks Confirms Hillary Sold Weapons to ISIS” (790,000; The Political Insider [Facebook])
- “It's Over: Hillary's ISIS Email Just Leaked & It's Worse Than Anyone Could Have Imagined” (754,000; Ending the Fed [Facebook])
- “Just Read the Law: Hillary Is Disqualified from Holding Any Federal Office” (701,000; Ending the Fed [Facebook])
- “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide” (567,000; Denver Guardian [Facebook])
Of course, some of these stories were so widely discredited that they were removed. But like radioactive elements, they have half-lives, and it would be a mistake to underestimate their lingering effects.
Silverman also lists the five leading mainstream news stories engaged by Facebook users, which share three common themes: they were from recognizable media outlets (The Washington Post, Huffington Post, The New York Post, CNN, The New York Times), they weren't critical of Hillary Clinton for the most part, and they were true. Silverman has performed yeoman's service in calling our attention to these statistics.
This is where the plot thickens. Mark Zuckerberg said the thought that fake news on Facebook might influence elections was a “pretty crazy idea.” 3 However, recent research suggests otherwise. Techno-sociologist Zeynep Tufekci suggests that it's a mistake to think that self-propagated nonsense can be dismissed as uninfluential. 4 Indeed, if self-propagated nonsense were dismissible we'd have no concept of herd mentality or groupthink, the public relations industry would be nonexistent, and the writings of Aldous Huxley and George Orwell would have fallen stillborn from the presses. Despite Zuckerberg's protestation, social media channels share some responsibility for the effects of the 2016 political misinformation cycles. At least at the national level, these nonsense stories were heavily biased in favor of extreme agendas and were, for the most part, vicious, partisan, and targeted. Although the actual effect of social-network echo chambers might never be completely understood, it most certainly had some sway in the election—and it likely wasn't positive from the point of view of free and fair elections. 5
Here we concern ourselves exclusively with the nature and distribution mechanisms of message propagation, and not the messages themselves. This is important to note because social media distribution channels have been used in aid of polarizing issues like white supremacy, homophobia, antimulticulturalism, ethnonationalism, antisemitism, racism, and sundry denialist agendas. However, in this column we'll focus on the media rather than the messages.
Contemporary analysis of fake news largely misses the mark for several reasons that I'll explain. But first let's come to some sort of agreement on what constitutes fake news and where it comes from.
First off, let's distinguish fake news from lookalike social satire sources like The Onion (theonion.com) and The Daily Currant (dailycurrant.com). If these outlets run a report stating that a national political figure has claimed that falafel is a gateway food to terrorism, for example, it's satire. If anyone takes this sort of report seriously, that's more an indictment of our educational system than of the individual. Satire, like all literary and art forms, favors a prepared mind. Incidentally, Steve Bogira's Chicago Reader post speaks cleverly and satirically to this issue. 6
That said, we can't make light of problems arising from the almost-invisible line between political satire and fractious partisanship. But to claim that one person's satire is another person's conspiracy theory is far too simplistic. Although tedious, the marginal cases have to be resolved by appealing to context and detail. For example, consider the recent bogus report of the pizzeria-based child-abuse ring, which the press referred to as Pizzagate , connected to a presidential candidate (www.snopes.com/pizzagate-conspiracy). By the time the community of partisans behind this fiction was banned by Reddit (www.reddit.com/r/pizzagate/comments/5da0kp/comet_ping_pong_pizzagate_summary), the story already had legs. And despite efforts by Snopes.com and other debunking sites that investigate conspiracy theories, cyburban myths, and other content-free claims such as this, this bogus story remains alive at this writing (www.cnn.com/2016/12/06/politics/trump-transition-michael-flynn-conspiracy-theories/index.html). Unfortunately, willfully partisan and unreflective people ignore debunking services on principle, and gullible people don't feel the need to use them. Those who are unwilling or unable to invest time to remain informed of the facts of current issues need some help.
From satirical sources, we'll now turn to an analysis of current weapons-grade fake news sources. The first notable distinction is the degree to which the source tries to conceal authorship and responsibility for content. Disclosed sources are more open about their responsibility for content. Examples of these include nationalreport.net, endingthefed.com, thepoliticalinsider.com, alternative-right.blogspot.com, breitbart.com, abcnews.com.co (which is not to be confused with the ABC network website, abcnews.go.com), and so on. We can verify the source by looking up the American Registry for Internet Numbers (ARIN) information on these domain names, investigating corporate filings, and in some cases even collecting background or publishing history on individual authors. This is to be contrasted with anonymous sources, which are usually concealed within a newsgroup or bulletin board. Many anonymous websites are called “image boards” owing to their origins as digital media posting sites. The random boards such as 4chan's “/b/” quickly expanded into anonymous free-for-all bulletin boards that championed the principle of unaccountable free expression and the act of trolling for social lurkers while hiding behind pseudonyms, avatars, relative identity daemons, and so on.
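The abcnews.com.co trick above relies on the fact that casual readers parse only the leading label of a hostname. A minimal sketch of a lookalike check follows; the outlet list and the tiny public-suffix set are illustrative assumptions only (a real tool would use the full Public Suffix List and registry data, as the ARIN lookups above suggest).

```python
# Flag hostnames whose leading label imitates a known news outlet.
# KNOWN_OUTLETS and MULTI_PART_SUFFIXES are hypothetical placeholders;
# a production tool would consult the Public Suffix List instead.

KNOWN_OUTLETS = {"abcnews.go.com", "nytimes.com", "washingtonpost.com"}
MULTI_PART_SUFFIXES = {"com.co", "co.uk", "go.com"}  # deliberately incomplete

def registrable_domain(host: str) -> str:
    """Return the registrable part of a hostname, e.g. 'abcnews.com.co'."""
    parts = host.lower().strip(".").split(".")
    # Handle two-label public suffixes (e.g., 'com.co') before the default case.
    if len(parts) >= 3 and ".".join(parts[-2:]) in MULTI_PART_SUFFIXES:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:])

def looks_like(host: str) -> list[str]:
    """List known outlets this host imitates but does not actually belong to."""
    label = registrable_domain(host).split(".")[0]
    return [o for o in KNOWN_OUTLETS
            if o.split(".")[0] == label
            and registrable_domain(o) != registrable_domain(host)]
```

Run against the example in the text, `looks_like("abcnews.com.co")` reports the imitation of abcnews.go.com, while the genuine network site passes clean.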
In some cases, the cloak of anonymity is used to protect legitimate free speech that's critical of dangerous adversaries, as seems to be the case with Anonymous' Project Chanology cyberattack on the Church of Scientology (www.wired.com/2008/01/anonymous-attac). In other cases, however, the anonymity encourages libel, slander, defamation, and lies. Though disclosed and anonymous sources are both capable of disseminating distasteful content, disclosed sources are shielded by the First Amendment, whereas anonymous sources are hiding behind secrecy and the resultant lack of accountability. As distasteful as it might be, it's a mistake to try to regulate or curtail anonymous speech, for both free speech and practical reasons. To paraphrase the immortal bard, if anonymizing be the price of free speech, blog on.
Disclosed fake news sources are akin to “mainsleaze spam” (mainsleaze.spambouncer.org/what-is-mainsleaze-spam/): like regular spam, they're annoying but out in the open and easy to recognize. On the other hand, bogus sources are truly covert operations. These are sources that seem authentic, but aren't. The “Denver Guardian” is typical in this regard; it claimed to be Denver's oldest news source, but this bogus outlet is best known for its story that allegedly tied an FBI agent investigating Hillary Clinton's email to a murder–suicide during the recent presidential election. That this news source is illegitimate and the news story was a total fabrication didn't in any way inhibit viral propagation via Facebook. 7 Bogus sources frequently use social media to relay false rumors, slanderous stories, and partisan propaganda, because they offer extensive reach and minimal filtering. Like all fake news, such partisan misinformation is guaranteed to be unverifiable.
For completeness, we'll also include relay sources such as those in Macedonia that have no ideological bias but just aggregate fake news for profit. 8 Relay sources aren't the cause of the problem under consideration, but they do exacerbate it.
So there you have it: fake news characterized by source—disclosed, anonymous, and bogus. At this point we're at the proverbial fork in the road. Legitimate journalists and scholars adjudge fake news as unworthy, unreliable, and tribalist. However, the people behind these stories regard them as a legitimate exercise of their First Amendment rights (it's unclear to what extent they believe what they post and publish). Although the journalists and scholars appear to be on secure footing, I submit that there's little to gain from a debate about whether fake news falls within First Amendment protections or whether these sources actually believe the stories they're posting. I suggest we skip the philosophical debates and deal with fake news at a technological level. Given the constant stream of digital effluent, the challenge is to find the necessary techniques to divert the most offensive parts from our immediate view. I illustrate this point by reference to some proposed solutions.
Michael Rosenblum thinks that media outlets should “restructure themselves to reflect the new reality of a free press, something that they have, in truth, never really been confronted with before.” 9 This approach is a nonstarter because it raises the question of whether the Facebook misinformation mania constitutes a free press in the first place. There's nothing about fake news to indicate that it involves free press issues, and hence there's nothing for media outlets to address. No putative news organization can compete with fake news in terms of sheer outrageousness and shock appeal. The success of supermarket tabloids is proof of that. New York Times executive editor Dean Baquet takes a more moderate approach: “We need to devise more ways to go after it [fake news], write about it and take it down. We need to make clear that it's not true.” 10 Both Rosenblum and Baquet miss the essential point. This isn't a supply-side problem; we need to accept the fact that there's no way to enforce fake-news hygiene—any filtration rests with the consumer.
Washington Post journalist Glenn Kessler offers a more reasonable alternative. 11 Before reading, he recommends we
1. authenticate the source (host),
2. check out the bona fides of the “contact us” page, and
3. vet the author.
If 1 through 3 check out, then we should
4. avoid suspending any disbelief when reading the article (a background in informal logic and rhetorical fallacies will serve you well in this regard),
5. peruse any associated advertising and links,
6. double-check any cited references, and
7. use search engine results to verify.
On this last point, Kessler recommends the “Snopes Field Guide to Fake News Sites and Hoax Purveyors” (www.snopes.com/2016/01/14/fake-news-sites/), and the interactive Real or Satire? website (realorsatire.com), which checks URLs against a user-driven blacklist.
Kessler's recommendations are spot on. It's a shame that we live in a world where they're necessary, but no one promised us a rose garden.
Kessler recognizes that the problem is demand-side and not supply-side. The notion that we could embarrass “fake newsies” into silence is absurd. Theories of cognitive dissonance and belief disconfirmation explain why. 12 Digital jihadists have mastered the care and feeding of tribalists. Even if every fake news source were blocked or filtered (a ridiculous impossibility to even consider), the tribalists would still remain galvanized by samizdat efforts. This has always been the epic fail in any kind of news remediation or filtering—attempts to discredit or withdraw cyburban mythers' or deniers' writings from public consumption only serve to confirm for them that their theories were in fact onto something.
Instead, amelioration efforts must be directed toward unwitting victims of misinformation. Unlike the tribalists, there's hope for them. However, Kessler's story-forensics approach suffers from two deficiencies. First, it comes too late in the information exchange. Assertions in the news should be flagged before the link is clicked, not just before the story is read. Second, there's too much overhead. This pocket-card approach is likely to be too much of a burden for most of the public. This problem calls for technical preemption.
Our challenge is technical, not semantic. Journalists and domain experts can easily spot fake news. FactCheck.org (www.factcheck.org), the Politifact Truth-o-Meter (www.politifact.com/truth-o-meter/statements), and the Washington Post fact-checker (www.washingtonpost.com/news/fact-checker) are all credible online resources. Other XML-based aggregators such as Protopage (www.protopage.com) are also useful. However, they share the same defect as Kessler's solution. We need an even higher-tech solution that's transparent to the user/reader and runs unobtrusively in the background until needed. We'll call our proposed solution the Interactive Gaudy-Fact Crap-Detector (IGFCD).
Here's the idea. Currently, all types of fake news rely on social media to drive traffic to their websites. We propose a meta-level crap-detecting engine in the form of an add-on or app that provides a reliability estimate for the source of any news link. Of course, it must be tunable and voluntarily downloaded. It would work in much the same way that spam-filtering, importance-ranking email clients and antivirus programs use blacklists and whitelists. The effect is to spot scurrilous links and, if called upon, provide commentary, statistics, and references indicating why the source was blacklisted or given a low reliability estimate. Additionally, the app could include domain registry data and links to related wiki entries. The objective is to provide reliability estimates that empower the user to make informed choices about whether to follow news links.
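The lookup step at the heart of such an engine can be sketched in a few lines. This is a minimal illustration, not a design: the list contents, the explanation strings, and the default score for unlisted hosts are all placeholder assumptions (the “tunable” knob appears here as the `unknown_score` parameter).

```python
# Sketch of the IGFCD lookup step: map a news link to a reliability
# estimate in [0, 1] plus the evidence behind it. List contents and
# scores are illustrative placeholders, not real curated data.
from urllib.parse import urlparse

BLACKLIST = {
    "denverguardian.com": "fabricated outlet; source of the FBI murder-suicide hoax",
    "endingthefed.com": "repeat purveyor of debunked election stories",
}
WHITELIST = {"washingtonpost.com", "nytimes.com"}

def reliability(url: str, unknown_score: float = 0.5) -> tuple[float, str]:
    """Return (score, explanation) for the host behind a news link."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in BLACKLIST:
        return 0.0, BLACKLIST[host]
    if host in WHITELIST:
        return 1.0, "listed as a vetted mainstream source"
    return unknown_score, "no listing; exercise normal caution"
```

In the envisioned add-on, hovering over a link would call something like `reliability(link)` and surface the explanation string in the pop-up, leaving the decision to follow the link with the reader.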
Dynamic, back-end databases that are open to public inspection will serve as aggregators of blacklisting/whitelisting services in a manner similar to that of MxToolbox (mxtoolbox.com/problem/blacklist). The fact-checking services mentioned above would also be good candidates for inclusion, as would BuzzFeed's Craig Silverman's spreadsheet of approximately 150 fake news websites. 2 The important thing to remember is that the proposed app would be a news segregator, as opposed to both news feeds (e.g., NewsBlur, Protopage, RSS feeds) and news aggregators (Google News, Huffington Post, Daily Beast). IGFCD is designed to keep undesirable bunk out of the user's face. It would work in the background and integrate seamlessly with all types of online content, with or without independent feed filtering. When links appear in a browser, the pointer or cursor triggers a pop-up that provides the relevant background information. Once the basic system is created, it could be expanded to include individual files indexed by DOI.
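The MxToolbox-style aggregation above amounts to polling several independent list services and reporting how many flag a given domain. A sketch, with the services stubbed as local sets (the service names and their contents are hypothetical):

```python
# Sketch of the back-end aggregation step: query several independent
# blacklist services for a domain and report the consensus. The three
# services here are stand-ins for real feeds (Snopes' field guide,
# Silverman's spreadsheet, user-driven lists), stubbed as local sets.

SERVICES = {
    "snopes_field_guide": {"endingthefed.com", "denverguardian.com"},
    "silverman_sheet": {"endingthefed.com", "thepoliticalinsider.com"},
    "realorsatire_user_list": {"denverguardian.com"},
}

def aggregate(domain: str) -> dict:
    """Report which services blacklist a domain and the fraction that do."""
    hits = sorted(name for name, listed in SERVICES.items() if domain in listed)
    return {
        "domain": domain,
        "listed_by": hits,
        "fraction": len(hits) / len(SERVICES),
    }
```

Keeping the underlying lists public, as the text argues, means any reader could audit exactly which services contributed to a domain's score.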
The IGFCD is inherently nonalgorithmic, as the sites and articles have to be evaluated by seasoned scholars and journalists to be determined as fakes. If an algorithm were available, the problem wouldn't exist in the first place, as the social media sites could have blocked access before the fake news became clickbait. Further, this isn't the type of thing that we could crowdsource—unless the crowds were limited to qualified journalists and scholars. This problem calls for better opinions, not more of them. It goes without saying that a project like this must be completely open to public inspection—although perhaps with a minimal time delay to prevent bogus newsies from playing leapfrog with the URL blacklists. There's also no disincentive for fake newsies to develop their own competing blacklist/whitelist engines; in effect, they're doing this already with social media shares. The add-on or app would be transparent to the databases: the user would simply choose the channels that are deemed most reliable.
The IGFCD provides a convenient unobtrusive barrier to the many irritating and offensive tribalist shibboleths that compete for our attention in the media ecosystem, and is likely to be appreciated by those who choose to take their truth unflavored—such as the casual reader who lacks the time and inclination to do investigative journalism on fake news. At a time when facts don't matter but memes do, we must face the memes with technology.
The search for a digital or legal isogloss for fake news is both practically impossible and socially undesirable. However, by tackling this as a demand-side problem and providing users/readers with technical assistance, we can more effectively filter out and diminish the impact of lies. Of course, the bigger problem is that our educational system fails to include critical analysis in the early curriculum, and there is little emphasis on these concepts throughout secondary school. The received view seems to remain that elementary and secondary schools are where our young people are inculcated with the “truths” that make them strong in body, mind, and spirit. The downside of this approach is of course that our children pretty much think alike, and thus, they are left unprepared for and defenseless against the fanaticism, extremism, superstition, and deceit that the modern world presents. Analysis of primary and secondary school textbooks will confirm that children are accustomed to accepting misinformation without question, 13 so is it any wonder we have a fake news problem?
Of course, fake news has been with us as long as there has been news. In fact, our history has long been subjected to hucksterism, with hoaxes including the Donation of Constantine, the Kensington Runestone, Piltdown Man, and Clifford Irving's book on Howard Hughes. How much more advanced would the world be today if Paul Mellon had invested in crap-detection technology rather than the Vinland Map? There's a lesson here for modern philanthropists who care about the integrity of elections and the survivability of democracy.
Well, I've done the heavy lifting. Now it's time for you developers out there to rush to the finish line.