We are living in the disinformation age. Whether in the form of fabricated articles, social media ads, or entire news websites, disinformation serves an unsettling purpose: spreading false information with the deliberate intention to deceive.
Disinformation has become a part of our everyday lives, but it can be difficult to identify. Modern propaganda techniques have made it easier than ever to pass off fabrications as legitimate information. However, by understanding how disinformation spreads, we just might have a chance to stop it.
Making Sense of Misinformation and Disinformation
In the midst of a pivotal election or global pandemic, societies are desperate for the latest information. Timely and accurate data helps inform decisions and keep people safe. But in 2020, false information spread like a virus.
This phenomenon took place through two primary modes: misinformation and disinformation. The key distinction is intent: misinformation is false information shared without the intent to mislead, while disinformation is deliberately designed to deceive. The danger comes when individuals, organizations, or governments knowingly amplify false information to disinform a society.
Modern use of the term “disinformation” dates back to the 1920s in the Soviet Union. Joseph Stalin coined the term dezinformatsiya to sound French, in an attempt to frame the word as if it had originated in the West.
From false claims that the U.S. invented AIDS to forged documents alleging that the U.S. supported apartheid, the Soviets continued to weaponize information throughout the 20th century.
Later, “disinformation” entered U.S. dictionaries after the Reagan administration’s deceptive 1986 campaign against Libya’s Muammar Gadhafi. The campaign involved American news media outlets falsely reporting that Gadhafi was in jeopardy of being attacked by U.S. bombers, or perhaps overthrown in a coup.
Disinformation in the 21st Century
In the digital age, disinformation is no longer reserved for powerful political leaders with deep resources. With the emergence of social media and artificial intelligence, practically any internet user can spoon-feed falsehoods to the masses on any given day.
In July 2016, for example, the fake news site WTOE 5 News falsely claimed that Pope Francis had endorsed Donald Trump. The website, parading itself as a trustworthy local news outlet, generated nearly one million Facebook engagements with the title: “Pope Francis Shocks World, Endorses Donald Trump for President.”
Later that same year, a site masquerading as ABC News under the domain “abcnews.co” racked up two million Facebook engagements with the headline: “Obama Signs Executive Order Banning The Pledge of Allegiance In Schools Nationwide.” The story was debunked, but not before it had convinced thousands of readers.
While these stories were undoubtedly crafted by people with malicious intent, they could just as easily have been composed by machines. Rapid disinformation attacks powered by artificial intelligence are beginning to flood social media platforms with fake news articles. Automation removes much of the manual labor for those dedicated to the intentional spread of false information, making it even easier for fake news to go viral.
Governments also play a prevalent role in the spread of disinformation. Evidence has been found that agencies in Iran, China, and Russia have used fake news to achieve a variety of outcomes related to the coronavirus, election results, and more.
Disinformation capitalizes on strong emotions, and none of us are immune to its effects. There is big money and power behind these campaigns, making each and every click high stakes.
How Can Disinformation Be Stopped?
Organizations around the world have been teaming up to put an end to the spread of disinformation. Projects like First Draft News work to empower individuals with resources that help build resilience against harmful and misleading information.
Other advocate organizations such as Digital Action and the World Wide Web Foundation work to protect democratic rights to quality information through policy change and collective action. The United Nations has even gotten in on the movement, asking people to “pause before sharing” potentially dangerous information on the web.
To successfully combat the troll farms, deepfakes, and phony headlines that bombard internet users on a daily basis, it is key to understand where they originate and how they are disseminated. Camille François, the chief innovation officer at a company called Graphika, is at the forefront of this fight.
François’ work focuses on employing machine learning to detect disinformation campaigns before they take hold. Her team at Graphika has partnered with clients like Facebook, Google, human rights organizations, and public institutions like the U.S. Senate Select Committee on Intelligence, to provide these services.
Their operation depends heavily on distinguishing normal patterns from suspicious ones. In most cases, organic conversations are spontaneous and idiosyncratic. Operations like troll farms have a hard time replicating that temporal, semantic, and network diversity.
“We call ourselves the cartographers of the internet,” says François. “Sometimes we map a conversation and we see a normal set of communities engaging with one another, and in the middle, a very dense and tight set of accounts that were all created on the same day and forming a tight little ball. We investigate, and it turns out to be a botnet.”
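The “tight little ball” pattern François describes can be illustrated with a toy heuristic. The sketch below is a minimal, hypothetical Python example, not Graphika’s actual method: it flags a cluster of accounts as suspicious when most of them share a single creation date. Real detection systems combine many temporal, semantic, and network signals rather than a single threshold like this one.

```python
# Toy heuristic inspired by the pattern described above: a cluster of
# accounts that were all created on the same day. The 0.8 threshold and
# the scoring function are invented for illustration only.
from collections import Counter
from datetime import date

def creation_date_concentration(creation_dates):
    """Fraction of accounts in a cluster sharing the most common creation date."""
    if not creation_dates:
        return 0.0
    counts = Counter(creation_dates)
    return max(counts.values()) / len(creation_dates)

def looks_like_botnet(creation_dates, threshold=0.8):
    """Flag a cluster when most of its accounts were created on one day."""
    return creation_date_concentration(creation_dates) >= threshold

# An organic community: accounts created over many years.
organic = [date(2012, 3, 1), date(2015, 7, 9), date(2018, 1, 22),
           date(2020, 5, 4), date(2011, 11, 30)]

# A suspicious cluster: nearly every account created on the same day.
suspicious = [date(2020, 6, 15)] * 9 + [date(2019, 2, 2)]

print(looks_like_botnet(organic))     # False
print(looks_like_botnet(suspicious))  # True
```

Even this crude signal separates the two clusters: the organic community’s creation dates are spread across years, while nine of the ten suspicious accounts share a single date.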
The team’s unique approach involves detailed mapping of the systems designed to manipulate online conversations. Their mission is to expose the activity before it creates large-scale chaos, but it’s no easy task. No two disinformation campaigns look alike: each is its own ecosystem, manufactured by different actors using their own favorite tactics.
In her early thirties, François is taking on the issue of a lifetime. She’s at the leading edge of what could prove to be one of the most consequential crusades the world has ever seen — and she’s doing it by reverse engineering the same technologies used to spread disinformation.
While it’s a high mountain to climb considering the sheer volume of content on the internet, François still believes social technology can be a force for good.
“I really still believe that new technology actually can make the world a better place,” she says. “I am interested in confronting what could make this dream go wrong, and ensuring that I can confront the darkest corners of the internet so that we can preserve the potential of the internet for democracy.”