Who Makes the Rules Online, and Why Do We Follow Them?
Understanding and punishing bad behavior online.
In its recent decision on whether to reinstate Donald Trump, the Facebook Oversight Board poked at the platform's arbitrary rules. "It is not permissible for Facebook to keep a user off the platform for an undefined period," the board wrote, "with no criteria for when or whether the account will be restored. In applying this penalty, Facebook did not follow a clear, published procedure. 'Indefinite' suspensions are not described in the company's content policies."
Michael McConnell, the co-chairman of the Oversight Board, told Bloomberg, "Their rules are a shambles. They are not transparent. They are unclear. They are internally inconsistent."
These insights are not new to Tracey Meares, a Yale University law professor and co-founder of The Justice Collaboratory. Meares and her colleagues have been calling for reform of the Oversight Board's design and operation since its inception. The social media governance work that the Justice Collaboratory does is based on procedural justice: the idea that fairness in processes matters alongside the justness of outcomes.
Meares says, "If you actually care about the kinds of interactions that people are having in this space, as a governance matter, then you would try to maximize an architecture that looks more like Reddit and groups than one built for the 'anybody gets to come' model that exists on Facebook and Instagram."
The Justice Collaboratory values collecting qualitative data on tech platforms and user moderation, and sees it as an important path to establishing pro-social communities online. Pro-social strategies enhance cooperation, engagement, and internalized rule following on our largest social platforms, like Facebook and Instagram. Social media governance is the "digital" manifestation of police reform.
In this week's newsletter, we chat with Meares about her work in social media governance, online spaces and morality, and her ideas on how to reform our platforms from the inside out. The following conversation took place over Zoom and has been edited for brevity and clarity.
Q&A
New_ Public: How did you become committed to social media governance as a cause?
Tracey Meares: The work that I do is about understanding the problems associated with criminal legal system processing. Criminal legal system processing is not, functionally, criminal "justice." It's obvious the two are fundamentally at odds with each other. So what do we do about that structure? One way is to explore social science theories around how people come to conclusions about the fairness of legal authorities, and authority in general.
The theory of procedural justice has become really important to my work. The digital space is critical for achieving progress on democracy and institutions because so much of the public spends time in this arena. It was pretty clear that social media platforms were focusing on addressing wrongdoers, importing the idea that "people obey the law because they fear the consequences of being punished."
People who want to change the current system are seeking an analogy, then thinking about an architecture that would allow us to achieve the analogy. But we can make whatever we want. So why don't we figure out what we're looking for? The step people often make is to name a place like a library, but the library is the way it was because that's all we had to work with. "Library," by definition, meant putting books in there. It doesn't even mean that anymore.
Now that you're spending time thinking about online spaces, what transfers from physical spaces and what does not?
What clearly transfers are conceptual frameworks about compliance: "We need to punish wrongdoers." "We need to find the bad actors." But we need to punish them in a way that conforms with this assumption that people will obey the law if they fear the consequences of failing to do so. [That] gives you a certain kind of regime of punishment.
The first step is to be kicked off for 24 hours, and then platform designers need to escalate. They need to escalate because that's implicit in a theory of deterrence. The theory that I work with is about internalized rule following: Tell people what the rules are. Until a few years ago, Facebook never told anyone what the rules were. Most people actually want to comply with rules. So give them visibility. You encourage people to internalize rules by treating them in a certain way rather than assuming they're going to be rule breakers.
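To make that contrast concrete, here is a minimal sketch, in Python, of the kind of escalating penalty ladder a deterrence theory implies. Everything in it is an assumption for illustration: the durations, the names, and the ladder itself are invented, not any platform's published policy.

```python
# Hypothetical deterrence-style enforcement: each repeat violation draws a
# longer suspension, on the theory that fear of escalating consequences is
# what produces compliance. All durations here are invented.
SUSPENSION_LADDER_HOURS = [24, 72, 168, None]  # None = indefinite

def next_penalty(prior_violations: int) -> int | None:
    """Return the suspension length in hours for the next violation,
    or None (indefinite) once the ladder is exhausted."""
    step = min(prior_violations, len(SUSPENSION_LADDER_HOURS) - 1)
    return SUSPENSION_LADDER_HOURS[step]
```

In these terms, the Oversight Board's objection is that Facebook jumped to the undefined final rung with no published criteria for restoration; an internalized-rule-following design would instead invest in making the rules visible before anyone reaches the ladder at all.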
What doesn't apply? Here's one idea Ethan Zuckerman offered recently: "being kicked off of Facebook is not prison." I don't need it to be as bad as prison for you to find the analogy. The analogy lies in the theory's assumptions: that the goal is compliance, and that the way you get people to obey is deterrence.
That there are consequences to your actions?
What is more persuasive is recognizing that the surfaces you're dealing with are very different. It might be easier to apply some of these ideas to a space like Reddit, as contrasted with "everybody just gets to go," like on Twitter. Facebook is somewhere in between. Some of it is "everybody gets to go" in the UX context, if your page is public, as opposed to groups on Facebook, where people can articulate rules.
If you actually care about the kinds of interactions that people are having in this space, as a governance matter, then you would try to maximize an architecture that looks more like Reddit and groups than you would build for "anybody gets to come."
It's just another way of understanding that when you go to the library, you're not going to a park. What is the desirability of having lots of open, park-like space (where anybody gets to do whatever they want) as opposed to libraries and museums? The designers of these spaces can make choices.
But here's what is fundamentally different about these spaces compared to the real world: by definition, this technology has surveillance built in. We should be thinking about how the architecture builds in the possibility of constant surveillance, which is not true in physical life.
So what about surveillance?
We have literature that helps us think about that. I'm talking about Thomas More's Utopia, written in 1516. He imagines a world in which everybody living on the little utopian island is constantly spying on everybody else. There's no government. There's no state, per se, no police, because everybody is watching each other.
Can you give me some examples of internalized rule following in the real world?
Stopping at red lights. Let's say it's 2 a.m. and you live in New Haven, CT. It's not New York, where tons of people live. There's nobody around, and you're driving home from work. There's no particular reason you need to stop. There's no cop. There's nobody in the car with you. You have no formal deterrence, no reason to think you're going to get caught, and nobody could get hurt. You follow the law just because it's right.
But what if you don't agree? What if you think it is stupid to stop at a red light at 2 a.m. when you want to go home? You still stop, and not because you fear the consequences. You do it because you think the law itself was adopted by a process you deem legitimate. You think the person telling you that this is a law that ought to be obeyed is legitimate, so you obey both them and it. You could think of that online, too.
Watch the video "Red Light: Why Do We Stop?"
How do tech platforms think about justice and what does that mean about how they think about people and morality?
The people who comprise the management of tech platforms swim in the world of individualism. Why do people follow rules? 1) Because they fear the consequences of failing to do so; 2) because they think it's the right thing to do; or 3) because they think the relevant authority has the right to dictate proper behavior to them. The people who comprise the leadership of tech platforms are ordinary people, so they'll say #1 or #2. They design in line with those ideas.
Let's think about driving. Cars are a dangerous technology. A long time ago, cars killed people, and people died in cars, all the time. One way of managing that problem after the fact (which is what tech companies do) is to get the bad drivers, or the bad cars, off the roads. That might address some of your problem. But there's a lot you can do before the accidents happen. You can start by looking at why people are bad drivers. They might fall asleep at the wheel.
Why is it so hard for tech companies to adapt to measurable, pro-social techniques?
If your entire orientation to the problem space is to identify bad actors, then all of your metrics are organized around that. Changing course requires developing new metrics, or taking the metrics that exist and using them in some different way. Pro-social work requires looking at a different set of metrics that still have to be developed. You would need to focus on activities that aren't in their rubric, that don't count as something that should be counted, like measures of stability, or how to assess civility.
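As a sketch of what that shift in metrics could look like, here are two ways of scoring the same month of community activity, written in Python. The field names, the ratios, and the premise that these quantities are even logged are all assumptions for illustration, not any platform's real instrumentation.

```python
from dataclasses import dataclass

@dataclass
class CommunityMonth:
    posts: int               # total posts made this month
    removals: int            # posts taken down for rule violations
    suspensions: int         # accounts penalized
    returning_posters: int   # members who came back and posted again
    answered_questions: int  # questions that got a substantive reply

def bad_actor_rate(m: CommunityMonth) -> float:
    """Enforcement-centric metric: success means catching rule breakers."""
    return (m.removals + m.suspensions) / max(m.posts, 1)

def pro_social_rate(m: CommunityMonth) -> float:
    """Pro-social metric: success means stability and reciprocity."""
    return (m.returning_posters + m.answered_questions) / max(m.posts, 1)
```

The design point is the contrast itself: a platform instrumented only for the first score has no way to notice when the second one is falling.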
A lot of the way that they understand the problem is about misleading information. People get to say whatever they want until it's a lie. The problem with that is there are all these truthful things you can say that are really problematic for creating a space in which people interact in a healthful way, in a way that's conducive to healthy civic spaces, in a way that helps people understand their role in a democracy. You can weaponize the truth.
If you could wave a magic wand to get tech folks to understand or do one thing differently, what would it be?
Deeper engagement with a wider spectrum of social scientific theories. Their business model has caused them to invest in a cognitive psychology that privileges instant gratification.
I try to be scientific, but I've seen people who interact in this space who can't get off of it; it's an addiction.
There's a whole bunch of other social science theories that help you understand intergroup relations, not instant gratification. There's political theory, or understanding the sociology of the ways in which communities are constructed, or studying history, or just reading W.E.B. Du Bois.
Learn more about Meares' work at Yale University's Justice Collaboratory.
A Real Example
In April, Twitter locked the account of Street Sense, a Washington, D.C. publication for people experiencing homelessness. Twitter mistakenly determined that Street Sense was a person who had been underage when the account was created in 2009, and on that basis suspended the account for violating its rules.
Editorial director Eric Falquero said the experience was embarrassing, and the suspension came at an inconvenient moment, when important news was breaking for their readers. "They stopped limiting one of our primary communication channels for no reason other than relying on automated enforcement of a blunt rule that doesn't fit for all users," he says. With support requests "going nowhere," the issue was finally resolved when a local Twitter employee noticed the problem and escalated it. For Twitter, it's a small inconvenience that was resolved quickly. But in terms of procedural justice, the platform failed Street Sense and its community.
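To see how a blunt automated rule can misfire this way, here is a minimal sketch in Python. It assumes (Twitter has not confirmed the mechanism) that the check compares a profile birth date against the account creation date, a comparison an organizational account can fail by construction, since it enters a founding date rather than a birth date.

```python
from datetime import date

MINIMUM_AGE_YEARS = 13  # a typical platform minimum; the real threshold is assumed

def underage_at_creation(birth_date: date, created: date) -> bool:
    """Blunt check: was the account holder under the minimum age when
    the account was created? An organization's founding date looks
    identical to a birth date here, so a publication 'born' in 2003
    that joined in 2009 reads as a six-year-old and trips the rule."""
    # Age in whole years on the creation date.
    age = created.year - birth_date.year - (
        (created.month, created.day) < (birth_date.month, birth_date.day)
    )
    return age < MINIMUM_AGE_YEARS
```

A human reviewer resolves that ambiguity instantly; the automated check, by design, cannot.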
What's Clicking
Online
Roboticist Ayanna Howard says our excessive faith in automated systems can lead us into dangerous situations. (MIT Technology Review)
Confronting disinformation spreaders on Twitter only makes it worse. (Vice)
In 2019, seven experts assessed the metrics included in Facebook's Community Standards Enforcement Report. (The Justice Collaboratory)
A new report finds the child safety problem on platforms is worse than we knew. (Platformer)
Offline: Design from Cities
Americans need a manual on how to have a life in a pandemic. (The Atlantic)
Spencer Silver, the creator of Post-its, invented a solution to a problem that did not appear to exist. (New York Times)
The pandemic has shown how public spaces are vital community hubs. (Gehl)
Do you have a tip or story idea that you would like to share with New_ Public? We truly value your feedback. Send an email to hello@newpublic.org.
Headed to the library,
The New_ Public team
Civic Signals is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.
I think it's worth lingering on the analogy of driving, and digging in a bit deeper. Setting aside the extreme cases of negligent drivers actually killing people, just consider how often otherwise non-violent, non-aggressive people are highly prone to road rage and aggressive driving when they get inside these machines (I suspect all drivers have been there at some point or other). There are strong arguments to be made that when we get inside these private-public machines/spaces as drivers, we 'fuse' with them to become some sort of hybrid creature that no longer behaves like a socially aware human walking through a busy public space. Rather, in our vehicles, we become weaponized cyborgs, disconnected and insulated from other drivers (and non-hybrids, such as pedestrians), and, working within the communication constraints of this form, we find ways to communicate our identities and views of the world via snarky bumper stickers (tweets?) and symbols/gestures (emojis?). Sound familiar?
I would suggest that it's worth considering that trying to promote pro-social behavior online is closer to promoting pro-social behavior with drivers on a busy inner-city road than it is to doing so in a public park or library filled with strangers interacting face-to-face (in a looser, quieter environment). Laws, speed limit signs, road safety campaigns, and good drivers may alleviate some of the worst manifestations of aggressive driving, disconnection, and insularity, but the effects of human-car hybridization seem to fundamentally override the humanity of drivers in many cases. To be sure, there are considerate, good drivers out there... but how much of the non-aggressive behavior on the road is due to self-preservation vs altruism?
How can we better humanize the driving experience and encourage pro-social behavior on the road? Some might suggest by fostering carpooling communities & public transportation (community-driven norms?), increased biking & walking (alternative mediums?), or, ironically, self-driving cars (algorithmic moderation?), but I suspect that many people are far too attached to their individualized vehicles as an extension of their identities/lives to let the thrill and personalized sanctity of them go anytime soon...
Anyway, this topic is far too multifaceted for me to unpack here properly right now. For anyone who is interested in contemplating these areas more deeply, I maintain a collection on Are.na about automobility: https://www.are.na/sean/auto-bodies