🤳 What Ukraine reveals about online misinformation
Researchers are divided on whether falsehoods can be a force for good
🇺🇦 Ukraine's ascendant social media strategy
⚖️ Digging into the ethical quandary of fighting viral lies with other lies
🔢 Better Know A Concept: Dunbar's number
More than a week into Russia's invasion of Ukraine, Ukrainians continue to face dire odds on the battlefield. But in online spaces, Ukrainians have seized the upper hand, beating Russian propagandists at their own game. As Drew Harwell and Rachel Lerman write in the Washington Post:
Videos have helped transform local stories of bravery into viral legends – and exposed a war Russia has fought to keep concealed. Ukrainians have posted videos of themselves thwarting tanks, guarding villages, making Molotov cocktails and using them to turn Russian vehicles into fireballs.
Laura Edelson, a PhD candidate at NYU studying misinformation, went into more detail in a viral thread. In an email exchange this week, she wrote:
The single biggest lesson we can learn from the first week of the Ukraine invasion is the importance of not leaving an information vacuum. The U.S. government was much more forthcoming than it has been in the past with the public, sharing information about the ground realities in Russia and in Ukraine. This left very little uncertainty [and] space for Russian disinformation to fill. An abundance of factual information makes it much harder for misinformation to take root.
As Edelson notes, less Russian propaganda has spread on English-language social media than might have been expected. But plenty of viral lies and propaganda have still circulated on platforms in recent days, from scammers and even from official Ukrainian accounts. Here, researcher Abbie Richards breaks down some deceptive tactics that have appeared in Ukraine-themed TikTok videos, including old footage, unrelated content and out-of-context audio:
In her thread, Edelson suggests that even nonfactual, debunked Ukrainian content is playing a positive role in combating Russian propaganda:
However, some researchers take issue with this last point. Darius Kazemi is a researcher and programmer who experiments with creating bots and small, private social networks. Specifically, he challenges the claim that debunked videos and memes can be a force for good. He tweets that the idea of using “false information as long as it is in the service of something the author agrees with or has already decided is true” is “extremely dubious.”
Can inaccurate, misleading content be positive? It's a difficult ethical question. Some might see false content posted in support of Ukraine as a case of the ends justifying the means, or a consequentialism appropriate to wartime. But for others, there is no justification for misinformation under any circumstances. In The Nation, Ishmael N. Daro says there's a “double standard” being employed by Western mainstream media outlets. “After years of warnings about the dangers of misinformation, many Western journalists, public figures, and news consumers are failing to apply their skepticism evenly,” says Daro.
Critics also argue that adding more falsehoods to an already overwhelming information ecosystem makes it harder to accurately understand what's happening in Ukraine. Large news organizations now have whole “visual forensics” teams working to sort out which videos are authentic. Daro writes, “Credulous reporting and unchallenged assumptions about who is and isn't trustworthy – or who does and doesn't deserve our compassion – can have major consequences when the stakes are this high.”
Where do you land on this issue? Is there ever a morally justifiable reason to post false or misleading information? What does the Ukraine conflict tell us about misinformation?
Please keep in mind that this was finished on Friday, about a rapidly unfolding situation, and events may have changed considerably before you read this on Sunday.
If a concept keeps coming up, or we think it's a particularly good one that could use a little unpacking, we'll take a closer look here, in "Better Know A Concept."
Dunbar's Number
The background: One concept that comes up a lot in conversations about socializing online is the idea that, biologically, humans are only capable of having so many friends. The idea is nearly 30 years old and originates in the research of Robin Dunbar, now an Emeritus Professor of Evolutionary Psychology at Oxford. Initially, Dunbar noticed a correlation between the sizes of primates' brains and the sizes of their social groups.
He then applied this logic to humans and concluded that for people, a “natural group size of about 150” is normal. Over the years, Dunbar refined the concept to include different “layers” of friendship. According to Dunbar's research, most people have about five close, intimate friendships, surrounded by successively larger, less familiar layers, each roughly three times the size of the one before. The middle layer – classically “Dunbar's number” – represents the roughly 150 meaningful friendships a person can maintain at once (although Dunbar says this typically ranges from 100 to 250). At the least familiar end, a person can have as many as 1,500 acquaintances.
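That layered structure follows a simple pattern: each layer is roughly triple the size of the one before it. A minimal sketch in Python (the layer labels here are illustrative shorthand, not Dunbar's exact terminology):

```python
# Approximate cumulative sizes of Dunbar's friendship layers,
# from closest friends out to mere acquaintances.
DUNBAR_LAYERS = {
    "close friends": 5,
    "good friends": 15,
    "friends": 50,
    "meaningful contacts": 150,   # the classic "Dunbar's number"
    "acquaintances": 500,
    "recognizable faces": 1500,
}

def scaling_ratios(layers):
    """Ratio between each layer's size and the previous layer's size."""
    sizes = list(layers.values())
    return [round(b / a, 1) for a, b in zip(sizes, sizes[1:])]

print(scaling_ratios(DUNBAR_LAYERS))  # each ratio hovers around 3
```

Running it shows every step-up in the hierarchy is a factor of about three, which is the regularity Dunbar and his collaborators report across datasets.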
In action: Dunbar's number is widely cited, and as Dunbar himself writes, the concept has been put to work in lots of contexts:
The evidence that personal social networks and natural communities approximate 150 in size, characterised by a very distinctive layered structure, has grown considerably in the past decade. We see it in telephone calling networks, Facebook groups, Christmas card lists, military fighting units and online gaming environments. The number holds for church congregations, Anglo-Saxon villages as listed in the Domesday Book and Bronze Age communities associated with stone circles.
But could that all be a coincidence? Along with other landmark TED Talk social psychology concepts like power posing and priming, Dunbar's number has been repeatedly challenged and cast into doubt. Most recently, researchers at Stockholm University disputed the biological underpinning of Dunbar's number, saying that the “correlation can disappear when adding more data to statistical models, such as information about other aspects of primate life.” Even diet, they suggest, is a better predictor of brain size than social group size. Dunbar, of course, rejects this analysis, with complaints about their statistical models that I'm not well-suited to judge. The Stockholm authors, in turn, find his rebuttals to be “illustrations of how poorly this approach works.”
In digital public spaces: Some studies of social networks, like this one of Twitter in 2011, seem to provide evidence for Dunbar's number. Researchers have even found that Dunbar's number can be effective in identifying bots on social platforms. But if Dunbar's number exists at all, does it describe relationships on Zoom? How about Tumblr friends we've never met in real life? How does growing up with social media, during a pandemic, change the number of friends kids have? There's a lot we still don't know about online socialization. Frances Haugen leaked documents to Bloomberg showing small, internal studies at Facebook on users' self-reported feelings of loneliness and connectedness. The data is messy and conflicting. More research is needed, and platforms should share – or be made to share – their data on how social media affects socialization and friendship.
Open Thread
On Tuesday at 12pm EST, we're sending out our Open Thread with Robin Sloan, author of Mr. Penumbra's 24-Hour Bookstore. Robin will be joining us for the first half hour to answer any questions about the book and his other interests. Especially worth checking out are his newsletter “notes on Web3” and his essay “fish”.
Thinking of Ukrainians,
Josh
Photo of Sophia Square by Josh Kramer. Dunbar's number design courtesy of Wikipedia author JelenaMrkovic via Creative Commons.
New_ Public is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.
🤳 What Ukraine reveals about online misinformation
I think there are times when using social media in a disinformative way is justified. The calculus comes down to a risk-benefit analysis: if, in your analysis, not posting the disinformation will result in more harm to the planet, including people, than posting it and breaking with your value system, then it makes sense to post. You have to be in a clear and honest space with yourself and your motives to do this effectively.