This post was originally published on the Data Values Digest.
As the war in Ukraine enters its tenth week, Russian president Vladimir Putin continues to wage an aggressive disinformation campaign aimed at controlling the narrative around his “special military operation.” The Kremlin’s disinformation factory targeted Ukraine long before Russia’s invasion on February 24, claiming that Ukraine had no right to independent statehood, that Ukrainian neo-Nazis were killing Russians, and countless other outlandish and easily debunked narratives. Today the Kremlin continues to push false narratives, for example by blaming Ukraine for the horrific Bucha massacre.
Those who share and amplify these false narratives are, knowingly or otherwise, victims of a Russian war machine designed to prey on human vulnerability to disinformation at the expense of democracy writ large. This tsunami of disinformation does not start or end with the war in Ukraine; it also threatens national security, social cohesion, and public health worldwide.
Popular responses to disinformation usually reach for cutting-edge technological tools at the expense of ethical, human-centered solutions. While artificial intelligence may be effective at identifying disinformation, merely finding and labeling manipulative content is not enough to slow its proliferation. Tech-based solutions are also limited by their dependence on a continuous supply of quality data to keep pace with a moving target; without it, the social consequences can be dire. Also concerning is that these same technologies can be programmed to disseminate disinformation, including in Ukraine.
Modified algorithms, artificial intelligence, and content labeling raise larger ethical questions around democratic norms of freedom of speech and information, as well as user agency and dignity. Consider, for example, Facebook’s much-debated news feed algorithm, which prioritizes content by likelihood of engagement rather than in chronological order. The algorithm aggregates more than 10,000 engagement “signals” for a post into a single score, dictating which posts appear in a user’s feed and when. Not all signals are weighted equally: at one point, emoji reactions were valued five times as much as a “like,” incentivizing and amplifying emotionally charged content.
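To make those mechanics concrete, here is a minimal sketch in Python of engagement-weighted ranking. The signal names, weights, and functions are illustrative assumptions, not Facebook’s actual implementation; the only details drawn from the reporting above are the 10,000-plus signal count and the five-to-one emoji-to-like weighting.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# Signal names and weights are illustrative assumptions, not Facebook's
# actual values; the real system reportedly combines 10,000+ signals.

SIGNAL_WEIGHTS = {
    "like": 1.0,
    "emoji_reaction": 5.0,  # reportedly weighted 5x a 'like' at one point
    "comment": 15.0,        # assumed weight, for illustration only
    "reshare": 30.0,        # assumed weight, for illustration only
}

def engagement_score(signals: dict) -> float:
    """Collapse a post's predicted engagement signals into a single score."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
               for name, count in signals.items())

def rank_feed(posts: list) -> list:
    """Order posts by engagement score instead of chronologically."""
    return sorted(posts, key=lambda p: engagement_score(p["signals"]),
                  reverse=True)

# A post that provokes emoji reactions outranks a more-liked, calmer one:
feed = rank_feed([
    {"id": "calm_update", "signals": {"like": 100}},
    {"id": "outrage_bait", "signals": {"like": 20, "emoji_reaction": 40}},
])
print([p["id"] for p in feed])  # ['outrage_bait', 'calm_update']
```

Even in this toy model, the post that provokes reactions (scoring 220) outranks the post with five times as many likes (scoring 100), which is precisely the amplification dynamic at issue.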
Empowering individuals and communities against disinformation requires an ethical, participatory, and human-centered approach. By equipping people with the skills to recognize and reject false narratives, media and information literacy training fosters individual autonomy, teaching people how to approach and wade through the data and information they encounter every day rather than treating them as passive actors in the information space.
Research on media and information literacy has yielded promising results: Different types of interventions have been found to reduce the perceived accuracy of false news without increasing skepticism of legitimate journalism, and to improve knowledge of media bias and the desire to consume quality news. In one study conducted by RAND and IREX (a non-profit organization where both authors work), even brief media literacy interventions helped reduce engagement with Kremlin-sponsored propaganda content. Other studies by IREX have shown that media literacy trainings increase participants’ confidence and sense of control when navigating online spaces by 41 percent, a measure that indicates a restored sense of agency among information consumers.
In Ukraine, IREX’s flagship media literacy program, Learn to Discern (L2D), scaled and integrated evidence-based media and information literacy lessons into 650 secondary schools, reaching over 95,000 high school students. An evaluation of the intervention found that, compared to a control group, L2D participants were twice as likely to detect hate speech, 18 percent better at identifying fake news stories, and 16 percent better at differentiating between facts and opinions. These results are encouraging, but more investment is needed in public institutions, especially education, to enact sustainable, systems-level inoculation against disinformation.
In a world facing the global expansion of autocratic rule, the influence of information operations and targeted disinformation campaigns is being turbocharged by technology and digital tools. Indeed, Putin has already begun to evolve his tactics in response to the Biden Administration’s decision to preempt the Kremlin’s false narratives.
Keeping pace with this constantly evolving threat requires a whole-of-society response. Researchers, philanthropists, policymakers, and communities will need to commit additional investment and attention to improving the evidence base of media and information literacy initiatives, funding efforts to scale solutions, and implementing human-centered approaches in programs and interventions.
Building citizen resilience to disinformation is not just a matter of supporting democracy and security (though it is); it is also a matter of empowering people to navigate their information environments in ways that build trust. Indeed, as trust in public institutions continues to deteriorate globally, disinformation has only become more pernicious, feeding on citizens’ legitimate concerns about the transparency, intentions, and performance of those who lead them.
Democracy and security cannot be sustained if we simply outsource the response to disinformation to technology alone. Instead, human-centered approaches must be combined with tech solutions for a holistic and full-scale response. The ways in which we respond to “the month that changed a century” will set the tone and precedent for a future likely to be shaped by information warfare.
This content was published on May 4, 2022.