This week we have three smaller features united by a common theme: platform responsibility for the common good. This is a huge subject because there are many areas in which we, the public, expect the private companies that run social media networks to be responsible actors. Let’s assess: What are some areas of responsibility? How are they doing in each area? What could be improved?
This week, we’re mainly interested in these platform responsibilities:
📏 Neutral application of rules
🪟 Transparency and accountability
🙆🏽‍♀️ Putting the public interest first
For Tech Policy Press’ Sunday Show, Justin Hendrix interviewed evelyn douek, doctoral candidate at Harvard Law School, Senior Research Fellow at the Knight First Amendment Institute at Columbia University, and incoming Assistant Professor at Stanford Law this fall. douek has done a lot of fascinating thinking and writing about content moderation. Here, Hendrix asks what she thinks moderation, and the law regulating it, will look like in a decade. douek rejects the “individual rights model,” in which laws require platforms to let users appeal individual moderation decisions, as slow and ineffective.
Instead, she says, it’s more important to think about the big, structural “decisions the platforms make that create that universe.” In other spheres, like banking and government, the law expects internal separation to prevent inappropriate interference and conflicts of interest. Don’t social media companies have a similar responsibility?
Separation of functions is one of the things that I talk about. That looks at, for example, I think one of the biggest discontents that people have with content moderation at the moment is the platforms say they have these rules, and we’re not necessarily sure that they’re applying them in practice. We have no idea whether they are actually interested in the neutral application of rules.
Indeed, based on a lot of reporting, in particular with respect to Facebook and the Facebook Files, it’s come to light that the content moderators would come to a certain decision with respect to some content, and another arm of the company would interfere with the enforcement decision, whether it’s a government lobbyist or a growth team. They’re responding to their own incentives, but those incentives aren’t aligned with the application of content moderation rules.
You could think about correcting that ex-post. You could think, ‘Okay, well, they’ve interfered. There’s bias in that enforcement decision. The decision’s wrong. Let’s give them an appeal. Let’s give them more notice and we’ll fix it on appeal,’ but that just seems both slow and ineffective. You’re only going to have an arbitrary pool of the kinds of people that will go through that process of appeal.
I think it’s more effective to try and prevent that bias from infecting the process upstream, ex-ante, and say, ‘Look, let’s just put a wall between those decision makers who are responding to certain kinds of incentives and those decision makers who are responding to the incentive to enforce content moderation rules.’ We see that in lots of areas of the law and lots of regulation. We see that in banks. We see that in the administrative state, where we say, ‘You can’t have people who have one kind of incentive interfering in the enforcement decisions of another part of the business.’
– evelyn douek
Occasionally, we’ll see an article or website that fits perfectly with something we’ve written about previously in the newsletter. In this feature, we revisit a newsletter essay and offer some additional thoughts.
Facebook on automated moderation
The setup: Human moderation is an ongoing difficulty for most of the large social platforms. It is expensive, publicly embarrassing, and often quite psychologically harmful to the people who do it. Recently, two former TikTok moderators sought to expand their lawsuit into a class action, on the heels of a $52 million Facebook settlement with moderators. The companies seem desperate for automated systems to replace human moderators, or at least to lighten their load by filtering out or demoting harmful, illegal, or low-quality content.
The newsletter: As we wrote about in January, a few platforms seized on the pandemic as an opportunity to accelerate the adoption of automated moderation systems. Unfortunately, these systems do not reliably work as well as anyone would like, both in individual cases and systematically across whole platforms. Even working as intended, automated moderation and downranking misses highly contextual content like bullying, and is overly sensitive on other topics, like reproductive health. Currently, the larger social media platforms are all using a combination of machine learning-powered moderation algorithms and human workers, but they are reluctant to reveal too many of the details. Here’s a more technical analysis of how these systems work.
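To make that blend concrete, here’s a minimal illustrative sketch, not any platform’s actual system: a hypothetical classifier score routes each post to automatic removal, human review, or no action, with made-up thresholds standing in for the policy choices platforms rarely disclose. Note how a keyword-ish model sails right past contextual harms like bullying, which is exactly the weakness described above.

```python
# Illustrative sketch of a hybrid moderation pipeline (hypothetical thresholds
# and scoring, not any platform's real system): an ML-style score routes each
# post to automatic removal, human review, or no action.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Stand-in for an ML model estimating the probability of a policy violation."""
    flagged_terms = {"spam-link", "scam"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.6 * hits)


def route(post: Post, auto_remove_at: float = 0.95, review_at: float = 0.6) -> str:
    """High-confidence violations are removed automatically, uncertain cases go
    to a human reviewer, and everything else is left up (or downranked)."""
    score = classifier_score(post)
    if score >= auto_remove_at:
        return "auto_remove"    # model is confident enough to act alone
    if score >= review_at:
        return "human_review"   # ambiguous cases need a person and context
    return "no_action"


if __name__ == "__main__":
    posts = [
        Post("1", "Act now on this scam spam-link"),       # obvious: auto_remove
        Post("2", "Limited-time scam offer, click fast"),  # borderline: human_review
        Post("3", "You can't sit with us"),                # contextual bullying: missed
    ]
    for p in posts:
        print(p.post_id, route(p))
```

The real systems are vastly more sophisticated, but the structural question is the same: who sets those thresholds, and who gets to override them?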
The TWIST: Facebook’s Oversight Board decided to take on a “case” about a breast cancer awareness post that had been flagged by an automated system and removed. The Board ultimately recommended increased transparency and accountability for these systems. Since the recommendation, Facebook has been testing alerts that would begin to do that. Here’s what they wrote (page 17) in their most recent update on the Oversight Board:
From September through November 2021, we ran an experiment for a small set of people in which we informed them whether automation or human review led to their content being taken down. We analyzed how this affected people’s experiences (such as whether the process was fair) and behaviors (such as whether the number of appeals or subsequent violations decreased).
We’ve decided to proceed with launching this new user messaging. We will start in certain locations to further understand the effects of this message for different policies and user groups, and iterate on our design to ensure user comprehension. We will continue to monitor how this affects people’s experiences to ensure that the goals of this recommendation are being achieved, and brief the board on our findings.
The crux: I have serious reservations about the efficacy of having an Oversight Board at all, and I’m unsure about how to think about its faux-legal, semi-binding decisions. But in this case, it’s interesting to see how their recommendation is being heeded by Meta, and I applaud the attempt to incorporate a little friction and humanity into the moderation process. It’s worth noting that we have no data to confirm these assertions, and in keeping with douek’s quote above, no guarantee that this initiative won’t be axed if it comes into conflict with growth and engagement. To moderate these huge networks, I think the answer probably isn’t only using automation or only using humans. It’s most likely going to be a blend for most platforms, and it should be an evolving process to find the right balance between safety and the rights of users.
Taking inspiration from our physical spaces metaphor, in Physical to Digital we look at a feature or design in our built environment and analyze how we might learn from it in terms of digital public space.
Buses go first
What is it? Many large cities with complex bus networks employ systems that allow buses to go first, ahead of cars, at an intersection. Transit signal priority, bus priority signals, or queue jump lanes are all variations on this idea. Sometimes the system employs a separate traffic light just for the bus that lets it have a head start. Other cities have a system where the light changes to green faster for the bus, either through a centralized system or a signal from the bus at the intersection. The idea is for the road design to prioritize public transit, and to privilege the public vehicle carrying many people over the private vehicles typically carrying just one person.
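As a rough illustration of the mechanics (purely hypothetical timings, not any city’s actual controller logic), the conditional version might work something like the sketch below: when a bus is detected approaching, the signal plan shifts a few seconds of green time toward the bus’s approach without starving cross traffic.

```python
# Illustrative sketch of transit signal priority (hypothetical timings, not a
# real traffic controller): when a bus is detected, shift a few seconds of
# green time toward its approach so it clears the intersection sooner.
from dataclasses import dataclass


@dataclass
class SignalPhase:
    approach: str
    green_seconds: int


def adjust_for_bus(phases: list[SignalPhase], bus_approach: str,
                   bus_detected: bool, max_shift: int = 8,
                   min_green: int = 10) -> list[SignalPhase]:
    """Return a one-cycle timing plan that favors the bus approach.

    Green time is taken from the other approaches, capped at `max_shift`
    seconds and never dropping any approach below `min_green`, so cross
    traffic and pedestrians still get served.
    """
    if not bus_detected:
        return phases
    cut = max_shift // max(1, len(phases) - 1)
    adjusted = []
    for phase in phases:
        if phase.approach == bus_approach:
            adjusted.append(SignalPhase(phase.approach, phase.green_seconds + max_shift))
        else:
            adjusted.append(SignalPhase(phase.approach, max(min_green, phase.green_seconds - cut)))
    return adjusted


if __name__ == "__main__":
    base_plan = [SignalPhase("main-street", 30), SignalPhase("cross-street", 30)]
    print(adjust_for_bus(base_plan, bus_approach="main-street", bus_detected=True))
    # main-street gets 38 seconds of green, cross-street drops to 22
```

The point of the sketch is that the priority is written into the signal plan itself, not left to the goodwill of individual drivers.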
What’s the best version of this? Many urbanists will tell you that improving bus service is one of the easiest single interventions a city can make, and one with some of the largest effects on racial equity and climate change mitigation. Buses are far cheaper to build and operate than subways, and their infrastructure is incredibly flexible and adjustable. Improving the bus has myriad cascading downstream effects, including cleaner urban air and less traffic congestion. So why don’t more people ride the bus? Typically, because the bus is not reliable and doesn’t get people where they want to go fast enough. Giving buses priority at the traffic light helps them stay on schedule and make trips faster. Priority signaling is just one of several strategies, like bus-only lanes, that cities have implemented in recent years to improve their bus systems.
What would it be online? It’s not a perfect 1:1 analogy, because platforms are not quite like urban transit agencies, but it’s undeniable that they operate in the public sphere. And occasionally, platforms do meet the moment and put the public interest front and center. In the last few years, platforms have highlighted resources on how to protect yourself from Covid-19, or how to vote in the 2020 election.
These efforts are positive, but somewhat limited in scope, and as usual, it’s difficult to get a full sense of them without better transparency and data reporting. To truly put the public interest ahead, platforms should consider taking the following steps:
Take ownership of the resources they share, rather than outsourcing all responsibility for the truth to third parties like fact-checkers, journalists, and governmental or global institutions.
Forgo advertising revenue from content focused on the public interest, or from the time people spend on the platform because of that content.
Openly share information about how they put these resources together and how they are being used and presented.
Build resources in collaboration with competitors, instead of secretly smearing them to the press through intermediaries.
Community cork board
Does your organization have an event, report, or initiative that’s relevant to our interests in healthy digital public spaces? We’d love to hear from you, and share the details here in our newsletter. Simply reply to this email or use our contact page.
California Common Cause and PEN America Los Angeles are hosting a panel conversation on April 6 about statewide media policy solutions for supporting local and ethnic media. “The loss of local and ethnic media outlets has left a void—a void that has quickly filled with online disinformation targeting and exploiting California’s communities.” How can we create a more robust news ecosystem? More info here.
Just a quick note that we are moving our regular monthly Open Thread back one week, and it will be emailed out on Tuesday, April 12th, at 12pm ET.
Also, we’re still seeking parents or caregivers, design experts, technologists, and researchers for our education design sprint in Oakland, CA, in late April. Please see this form for more information.
Taking responsibility,
Josh
Design by Josh Kramer. Traffic light via Google Maps. “Know the facts” screenshot from Twitter.
New_ Public is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.