🤳 What Ukraine reveals about online misinformation
Researchers are divided on whether falsehoods can be a force for good
🇺🇦 Ukraine’s ascendant social media strategy
⛏ Digging into the ethical quandary of fighting viral lies with other lies
🔢 Better Know A Concept: Dunbar’s number
More than a week into Russia’s invasion of Ukraine, Ukrainians continue to face dire odds on the battlefield. But in online spaces, Ukrainians have seized the upper hand — beating Russian propagandists at their own game. As Drew Harwell and Rachel Lerman write in the Washington Post,
Videos have helped transform local stories of bravery into viral legends — and exposed a war Russia has fought to keep concealed. Ukrainians have posted videos of themselves thwarting tanks, guarding villages, making molotov cocktails and using them to turn Russian vehicles into fireballs.
Laura Edelson, a PhD candidate at NYU studying misinformation, went into more detail in a viral thread. In an email exchange this week, she wrote:
The single biggest lesson we can learn from the first week of the Ukraine invasion is the importance of not leaving an information vacuum. The U.S. government was much more forthcoming than it has been in the past with the public, sharing information about the ground realities in Russia and in Ukraine. This left very little uncertainty [and] space for Russian disinformation to fill. An abundance of factual information makes it much harder for misinformation to take root.
As Edelson notes, less Russian propaganda has spread on English-speaking social media than might have been expected. But there have still been plenty of viral lies and propaganda on platforms in recent days, from scammers and even from official Ukrainian accounts. Here, researcher Abbie Richards breaks down some deceptive tactics that have appeared in Ukraine-themed TikTok videos, including old footage, unrelated content, and out-of-context audio:
In her thread, Edelson suggests that even nonfactual, debunked Ukrainian content is playing a positive role in combating Russian propaganda:
However, some researchers take issue with this last point. Darius Kazemi is a researcher and programmer who experiments with creating bots and small, private social networks. Specifically, he challenges the claim that debunked videos and memes can be a force for good. He tweets that the idea of using “false information as long as it is in the service of something the author agrees with or has already decided is true” is “extremely dubious.”
Can inaccurate, misleading content be positive? It’s a difficult ethical question. Some might see false content posted in support of Ukraine to be a case of the ends justifying the means, or consequentialism appropriate to wartime. But for others, there is no justification for misinformation in any case. In The Nation, Ishmael N. Daro says there’s a “double standard” being employed by Western mainstream media outlets. “After years of warnings about the dangers of misinformation, many Western journalists, public figures, and news consumers are failing to apply their skepticism evenly,” says Daro.
Also, critics argue that adding more falsehoods to an already overwhelming information ecosystem increases the difficulty of accurately understanding what’s happening in Ukraine. Large news organizations now have whole “visual forensics” teams working on sorting out which videos are authentic. Daro writes, “Credulous reporting and unchallenged assumptions about who is and isn’t trustworthy—or who does and doesn’t deserve our compassion—can have major consequences when the stakes are this high.”
Where do you land on this issue? Is there ever a morally justifiable reason to post false or misleading information? What does the Ukraine conflict tell us about misinformation?
Please keep in mind that this was finished on Friday, about a rapidly unfolding situation, and events may have changed considerably before you read this on Sunday.
If a concept keeps coming up, or we think it's a particularly good one that could use a little unpacking, we'll take a closer look here, in "Better Know A Concept."
The background: One concept that comes up a lot in conversations about socializing online is the idea that, biologically, humans are only capable of having so many friends. This idea is nearly 30 years old and originates in the research of Robin Dunbar, now an Emeritus Professor of Evolutionary Psychology at Oxford. Initially, Dunbar noticed a correlation between the sizes of primates’ brains and the sizes of their social groups.
He then applied this logic to humans and concluded that for people, a “natural group size of about 150” is normal. Over the years, Dunbar refined the concept into different “layers” of friendship. According to Dunbar’s research, most people have about five close, intimate friendships, surrounded by successively larger layers of decreasing familiarity. The middle layer (classically “Dunbar’s number”) represents the roughly 150 meaningful friendships a person can maintain at once, although Dunbar says this typically ranges from 100 to 250 for most people. At the least familiar end, a person can have as many as 1,500 acquaintances.
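The layered structure above can be sketched as a simple geometric progression: each layer is roughly three times the size of the one inside it, starting from about five intimates. This is only an illustration of the scaling Dunbar describes; the tripling ratio is the commonly cited approximation, not an exact law.

```python
def dunbar_layers(base=5, ratio=3, levels=6):
    """Successive friendship-layer sizes: a geometric progression
    starting at ~5 intimates and roughly tripling at each step."""
    return [base * ratio ** i for i in range(levels)]

# The exact progression is 5, 15, 45, 135, 405, 1215. The widely
# quoted figures (5, 15, 50, 150, 500, 1500) are rounded versions
# of this same rough tripling.
print(dunbar_layers())
```

Note that the fourth layer of this toy progression lands near 150 (Dunbar’s number itself), and the outermost near 1,500 (the acquaintance ceiling mentioned above).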
In action: Dunbar’s number is widely cited, and as Dunbar himself writes, the concept has been put to work in lots of contexts:
The evidence that personal social networks and natural communities approximate 150 in size, characterised by a very distinctive layered structure, has grown considerably in the past decade. We see it in telephone calling networks, Facebook groups, Christmas card lists, military fighting units and online gaming environments. The number holds for church congregations, Anglo-Saxon villages as listed in the Domesday Book and Bronze Age communities associated with stone circles.
But could that all be a coincidence? Along with other landmark TED Talk social psychology concepts like power posing and priming, Dunbar’s number has been repeatedly challenged and cast into doubt. Most recently, researchers at Stockholm University disputed the biological underpinning of Dunbar’s number, saying that the “correlation can disappear when adding more data to statistical models, such as information about other aspects of primate life.” Even diet, they suggest, is a better predictor of brain size than social groups. Dunbar, of course, rejects this analysis with complaints about their statistical models that I’m not well-suited to judge. In response, the Stockholm authors find his rebuttals to be “illustrations of how poorly this approach works.”
In digital public spaces: Some studies of social networks, like this one of Twitter in 2011, seem to provide evidence for Dunbar’s number. Researchers have even found that using Dunbar’s number can be effective in identifying bots on social platforms. But if Dunbar’s number exists at all, does it describe relationships on Zoom? How about Tumblr friends we’ve never met in real life? How does growing up with social media, during a pandemic, change the number of friends kids have? There’s a lot we still don’t know about online socialization. Frances Haugen leaked documents to Bloomberg showing small, internal studies at Facebook on users’ self-reported feelings of loneliness and connectedness. The data is messy and conflicting. More research is needed, and platforms should share, or be made to share, their data on how social media affects socialization and friendship.
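To make the bot-detection intuition concrete, here is a toy heuristic of our own (not any published researcher’s method): a human can plausibly maintain only so many genuinely reciprocal ties, so an account sustaining two-way interactions far beyond Dunbar’s ceiling is suspicious. The threshold multiple here is an arbitrary assumption for illustration.

```python
DUNBAR_NUMBER = 150  # rough ceiling on meaningful human relationships

def looks_automated(reciprocal_ties: int, threshold_multiple: int = 10) -> bool:
    """Flag an account maintaining far more two-way ties than a
    human plausibly could. `threshold_multiple` is an illustrative
    knob, not an empirically derived cutoff."""
    return reciprocal_ties > DUNBAR_NUMBER * threshold_multiple

print(looks_automated(120))   # typical human account
print(looks_automated(4000))  # far beyond any plausible Dunbar layer
```

Real bot-detection systems combine many signals (posting cadence, content similarity, network structure); this sketch only shows how a Dunbar-style ceiling could serve as one of them.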
On Tuesday at 12pm EST, we’re sending out our Open Thread with Robin Sloan, author of Mr. Penumbra’s 24‑Hour Bookstore. Robin will be joining us for the first half hour to answer any questions about the book and his other interests. Especially worth checking out are his newsletter “notes on Web3” and his essay “fish”.
Thinking of Ukrainians,
Photo of Sophia Square by Josh Kramer. Dunbar’s number design courtesy of Wikipedia author JelenaMrkovic via Creative Commons.
New_ Public is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.
I think there are times when using social media in a disinformative way is justified. The calculus comes down to a risk-benefit analysis: if, in your analysis, not posting the disinformation would result in more harm to the planet, including people, than posting it and breaking with your value system, then it makes sense to post. You have to be in a clear and honest space with yourself and your motives to do this effectively.