📏 Who Makes the Rules Online And Why Do We Follow Them?

Understanding and punishing bad behavior online.

In its recent decision on whether to reinstate Donald Trump, the Facebook Oversight Board poked at the platform’s arbitrary rules. “It is not permissible for Facebook to keep a user off the platform for an undefined period,” the board wrote, “with no criteria for when or whether the account will be restored. In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies.”

Michael McConnell, the co-chairman of the Oversight Board, told Bloomberg: “Their rules are a shambles. They are not transparent. They are unclear. They are internally inconsistent.”

These insights are not new to Tracey Meares, a Yale Law School professor and co-founder of The Justice Collaboratory. Meares and her colleagues have been calling for reform of the Oversight Board’s design and operation since its inception. The social media governance work that the Justice Collaboratory does is based on procedural justice, the idea that fairness in processes matters alongside the justness of the results.

Meares says, “If you actually care about the kinds of interactions that people are having in this space, as a governance matter, then you would try to maximize an architecture that looks more like Reddit and groups” than the “anybody gets to come” model that exists on Facebook and Instagram.

The Justice Collaboratory values collecting qualitative data on tech platforms and user moderation, and sees it as an important path to establishing pro-social communities online. Pro-social strategies enhance cooperation, engagement, and internalized rule following within our largest social platforms, like Facebook and Instagram. In this view, social media governance is the “digital” manifestation of police reform.

In this week’s newsletter, we chat with Meares about her work in social media governance, online spaces and morality, and her ideas on how to reform our platforms from the inside out. The following conversation occurred on Zoom and has been edited for brevity and clarity. 

Q&A

New_ Public: How did you become committed to social media governance as a cause?

Tracey Meares: The work that I do is about understanding the problems associated with criminal legal system processing. Criminal legal system processing is not functionally criminal “justice.” It's obvious they're fundamentally at odds with each other. So what do we do about that structure? One way is to explore social science theories around how people come to conclusions about the fairness of legal authorities, and authority, in general. 

The theory of procedural justice has become really important to my work. The digital space is critical for achieving progress on democracy and institutions because so much of the public spends time in this arena. It was pretty clear that social media platforms were focusing on addressing wrongdoers by importing the idea that “people obey the law because they fear the consequences of being punished.”

People who want to change the current system are seeking an analogy, then thinking about an architecture that would allow us to achieve the analogy. But we can make whatever we want. So why don't we figure out what we're looking for? The step people often make is to name a place like a library, but the library is the way it was, because that's all we had to work with. Library, by definition, meant putting books in there. It doesn’t even mean that anymore. 

Now that you're spending time thinking about online spaces, what transfers from physical spaces and what does not?

What clearly transfers are conceptual frameworks about compliance: “We need to punish wrongdoers.” “We need to find the bad actors.” But we need to punish them in a way that conforms with the assumption that people will obey the law if they fear the consequences of failing to do so. [That] gives you a certain kind of regime of punishment. 

The first step is to kick people off for 24 hours, and then platform designers need to escalate. They need to escalate because that's implicit in a theory of deterrence. The theory that I work with is about internalized rule following: Tell people what the rules are. Until a few years ago, Facebook never told anyone what the rules were. Most people actually want to comply with rules. So give them visibility. You encourage them to internalize rules by treating people in a certain way rather than assuming they're going to be rule breakers. 

What doesn’t apply? Here's one idea that Ethan Zuckerman raised recently: ‘being kicked off of Facebook is not prison.’ I don't need it to be as bad as prison for you to find the analogy. The analogy is that the theory's assumption is compliance, and that the way you get people to obey is deterrence. 

That there are consequences to your actions?

What is more persuasive is recognizing that the surfaces that you're dealing with are very different. It might be easier to apply some of these ideas to a space like Reddit, as contrasted with “everybody just gets to go,” like on Twitter. Facebook is somewhere in between: in the UX context, everybody gets to go if your page is public, as opposed to groups on Facebook, where people can articulate rules. 

If you actually care about the kinds of interactions that people are having in this space, as a governance matter, then you would try to maximize an architecture that looks more like Reddit and groups than you would build for ‘anybody gets to come’. 

It's just another way of understanding that when you go to the library, you don't go to a park. What is the desirability of having lots of open, park-like space (where anybody gets to do whatever they want) as opposed to libraries and museums? The designers of these spaces can make choices.

But here’s what is fundamentally different about these spaces compared to the real world. By definition, this technology has surveillance built in. We should be thinking about how the architecture builds in the possibility of constant surveillance, which is not true in real physical life. 

So what about surveillance? 

We have literature that helps us think about that. I'm talking about Thomas More's Utopia, written in 1516. He imagines a world in which everybody living on the little utopian island is constantly spying on each other. There's no government. There's no state, per se, no police, because everybody is watching each other. 

Can you give me some examples of internalized rule following in the real world?  

Stopping at red lights. Let's say it's 2AM and you live in New Haven, CT. It's not New York, where tons of people live. There's nobody around; you're driving home from work. There's no particular reason you need to stop. There's no cop. There's nobody in the car with you. You have no formal deterrence, you have no reason to think that you're going to get caught, and nobody could get hurt. You follow the law just because it's right.

But what if you don't agree? What if you think it is stupid for you to stop at a red light at 2AM when you want to go home? It's not because you fear the consequences. You do it because you think that the law itself was adopted by a process that you deem legitimate. You think that the person who is telling you that this is a law that ought to be obeyed is legitimate, so you obey them and it. You could think of that online, too.

Watch the video “Red Light: Why Do We Stop?”

How do tech platforms think about justice and what does that mean about how they think about people and morality?

The people who comprise the management of tech platforms swim in the world of individualism. Why do people follow rules? 1.) Because they fear the consequences of failing to do so, 2.) because they think it's the right thing to do, or 3.) because they think the relevant authority has the right to dictate proper behavior to them. The people who comprise the leadership of tech platforms are ordinary people, so they'll say #1 or #2. They then design in ways that are consistent with those ideas.

Let's think about driving. Cars are a dangerous technology. A long time ago, cars would kill people and people would die in cars all the time. One way of managing that problem, after the fact (which is what tech companies do), is to get the bad drivers off the roads, or the bad cars off the road. Well, that might address some of your problem. But there's a lot you can do before the accidents happen. You can start by looking at why people are bad drivers. They might fall asleep at the wheel. 

Why is it so hard for tech companies to adapt to measurable, pro-social techniques?

If your entire orientation of the problem space is to identify bad actors, then all of your metrics are organized around that. A pro-social approach requires that you develop new metrics, or take the metrics that exist and use them in some different way. You would need to focus on activities that aren't in their rubric, that don't count as something that should be counted, like measures of stability, or how to assess civility.

A lot of the way that they understand the problem is about misleading information. People get to say whatever they want until it's a lie. The problem with that is there's all these truthful things that you can say, that are really problematic for creating a space in which people interact in a healthful way, in a way that's conducive to healthy civic spaces, in a way that's about helping people to understand their role in a democracy. You can weaponize the truth. 

If you could wave a magic wand to get tech folks to understand or do one thing differently, what would it be?

I'd want deeper engagement with a wider spectrum of social scientific theories. Their business model has caused them to invest in cognitive psychology that privileges instant gratification. 

I try to be scientific, but I've seen people who interact in this space who can't get off of it. It's an addiction.

There's a whole bunch of other social science theories that help you understand intergroup relations, not instant gratification. There’s political theory or understanding the sociology of the ways in which communities are constructed, or studying history, or just reading W.E.B. Du Bois.

Learn more about Meares’ work at Yale University’s The Justice Collaboratory


A Real Example

In April, Twitter locked the account of Street Sense, a Washington D.C. publication for people experiencing homelessness. Twitter mistakenly determined that the account belonged to a person who had been underage when it was created in 2009, flagged it as a rules violation, and suspended it from the platform.

Editorial director Eric Falquero said the experience was embarrassing, and the suspension came at an inconvenient moment, when important news was breaking for their readers. “They limited one of our primary communication channels for no reason other than relying on automated enforcement of a blunt rule that doesn't fit for all users,” he says. With support requests “going nowhere,” the issue was finally resolved when a local Twitter employee noticed the problem and escalated it. For Twitter, it was a small inconvenience that was resolved quickly. But in terms of procedural justice, the platform failed Street Sense and its community.


What’s Clicking

🏙 Offline: Design from Cities

  • Spencer Silver, the creator of Post-its, invented a solution to a problem that did not appear to exist. (New York Times)


Do you have a tip or story idea that you would like to share with New_ Public? We truly value your feedback. Send an email to hello@newpublic.org.


Headed to the library,

The New_ Public team

Civic Signals is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.