Studying the Real Ramifications of Fake News

In May 2016, a handful of Texans converged on a new Islamic library in Houston, waving a “#whitelivesmatter” banner. They attended this “Stop Islamization of Texas” rally at the behest of the Facebook page “Heart of Texas,” which claimed, falsely, that the library had received public funding. Purporting to support conservative values, Heart of Texas regularly spread false information about voter fraud, immigrants, and Hillary Clinton. With 249,000 likes, the page had a wider reach than the official Texas Republican and Democratic Party pages combined—a remarkable feat, especially considering that Heart of Texas originated not in Texas, but all the way in Russia.

Disturbed by news of Russia’s alleged intervention in the 2016 election, I joined Professor Nate Persily’s Fake News and Misinformation Policy Practicum, where a group of students is working with the Hewlett Foundation’s Madison Initiative to find ways to stem the spread of fake news and misinformation. The research I’m involved in focuses specifically on how Russia may have used such fake news to manipulate voters in the 2016 election, as understanding our vulnerabilities will help us formulate better solutions.

In studying the history, we realized that Russia’s use of fake news can be traced back to the Soviet era. During the Cold War, the United States and the Soviet Union engaged in influence operations, known as active measures, designed to undermine the political systems of rival countries.

Active measures went beyond the information gathering of espionage; they also entailed spreading false information to influence events and attitudes. The Soviet Union handed out leaflets and planted news stories in friendly newspapers, spreading rumors that the CIA had assassinated Kennedy in the 1960s and that the Pentagon had invented AIDS in the 1980s. But practical limitations largely hampered the Soviets in their disinformation efforts: Pamphlets could reach only so many people, and publishing a news story to reach a wider audience required the cooperation of the gate-keeping American media.

Our research concentrates on how social media helped the Kremlin overcome these difficulties in the 2016 election. Per a March 2, 2017 U.S. counterintelligence report, American intelligence overheard Russian intelligence officers boasting of plans to “pay Clinton back” and “cause chaos in the upcoming U.S. election.” To that end, high-level Russian officials authorized propaganda campaigns on social media to encourage Americans to elect a pro-Russia candidate.

Sarah Mahmood, JD ’19 (photo by: Savannah Fletcher, JD ‘18)

Russian efforts focused on platforms like Facebook and Twitter. Just this fall, Facebook turned over 3,000 Russian-linked ads, viewed by tens of millions of people and often targeted to swing states, to the House and Senate investigative committees. The ads were designed to exacerbate political divisions in the United States. For example, ads highlighting Muslim support for Clinton were targeted to users who had conducted anti-Islamic searches in the past. In creating profiles like Heart of Texas, Russian sources didn’t just impersonate conservatives. Facebook further discovered Russian origins behind pages like “Blacktivist,” which had urged followers to take a stand against police brutality by voting for candidate Jill Stein. With 360,000 likes, Blacktivist had a greater Facebook presence than the verified page for Black Lives Matter.

In this way, fake news sources dominated social media. On Facebook, the top-performing fake news election stories received more engagement than the top stories from major news outlets, including the New York Times, Washington Post, and Huffington Post. On Twitter, too, users shared more inaccurate, polarizing, and conspiratorial content than legitimate news sources in the lead-up to the election, with starker ratios in some swing states.

Social bots, automated software applications that run social media accounts, likely helped spread Russian propaganda. While their origins can’t be determined with total certainty, many pro-Trump bots have been tied to Russia because their posts referenced Russian news outlets. In fact, during the Republican primary, some Twitter accounts that attacked Senator Ted Cruz also tweeted out Russian memes. Tellingly, some pro-Trump bots later pivoted to tweeting about elections in France and Germany, just as pro-Leave bots after Brexit had switched to tweeting pro-Trump messages. Such repurposing suggests that the bots have foreign roots and are deployed by a foreign actor in whatever context is most useful at the moment.

Ultimately, to contend with the impact of such fake news, successful measures must target three areas: (1) information production, (2) information distributors, and (3) information consumption.

To improve information production, mainstream journalists need to know how to produce accurate, informative, and engaging content. Information distributors, in turn, need to take steps to ensure the accuracy of the information shared on their platforms. Social media companies, which now largely control information distribution, must vet accounts for authenticity more aggressively. Platforms should also consider algorithmic changes to down-rank false stories, better identify authentic content, and inhibit posts from going viral before they are fact-checked. Despite best efforts, some inaccurate content will inevitably be posted; built-in fact-checking functions are therefore also needed. Google and Facebook are already investing in fact-checking tools for their users. Social media companies should also consider creating a database of information consumer reports, using independent assessors to score news outlets on their accuracy and orientation and displaying those scores next to search results and in social media streams.
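To make the down-ranking idea concrete, here is a minimal sketch of how a feed could weight a story’s engagement by an outlet accuracy score from independent assessors, holding back the lowest-rated outlets for fact-checking. All names, scores, and the `0.3` threshold are hypothetical illustrations, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    engagement: float   # raw engagement signal (clicks, shares, likes)
    accuracy: float     # outlet accuracy score in [0, 1] from independent assessors

def rank_feed(stories, accuracy_floor=0.3):
    """Down-rank low-accuracy stories by weighting engagement by accuracy;
    stories from outlets below the floor are held back for fact-checking."""
    eligible = [s for s in stories if s.accuracy >= accuracy_floor]
    flagged = [s for s in stories if s.accuracy < accuracy_floor]
    ranked = sorted(eligible, key=lambda s: s.engagement * s.accuracy, reverse=True)
    return ranked, flagged

# Hypothetical feed: the highest-engagement story comes from the least accurate outlet.
feed = [
    Story("Pope endorses candidate", engagement=9000.0, accuracy=0.1),
    Story("Senate passes budget bill", engagement=1200.0, accuracy=0.95),
    Story("Celebrity gossip roundup", engagement=5000.0, accuracy=0.6),
]
ranked, flagged = rank_feed(feed)
```

Under this weighting, raw popularity alone no longer determines placement: the viral but low-accuracy story is routed to fact-checkers instead of topping the feed.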

To be completely successful, though, such measures require citizens to learn how to better assess content accuracy themselves. After all, fact-checkers can be only so effective when much of the public may not even trust mainstream sources. Thus, greater investments might be made in news literacy and civic education programs in schools; exams like the SAT could be redeveloped to test basic civic knowledge and media literacy. The United States can also look to European countries, which, in response to similar Russian influence operations, updated their school curricula with lessons on common values and national heritage to make their citizens less susceptible to divisive fake news stories.

The ease with which the Kremlin exploited social rifts in America to further its agenda is troubling. The digital era has enabled such active measures to be more successful than ever.

Thus, social media platforms need to start seriously contending with their anti-democratic potential. The U.S. government should also take steps to promote media literacy and civic education. Most vitally, I feel that American society must find some way to heal its fractures or remain vulnerable to foreign manipulation. SL

Sarah Mahmood is a second-year law student from New York. Before law school, she worked at CNN’s Anderson Cooper 360. Sarah graduated from Wellesley College with a degree in political science and plans to practice political law.