🚧 Integrity workers ensure safety on the social internet
💼 The Integrity Institute is a nonprofit for independent integrity professionals
🌻 Digital civil servants could help keep digital public spaces flourishing
Ensuring Safety
In January 2020, we put out our Signals, over four hundred pages of research detailing our surveys, focus groups, and literature reviews on the best qualities of digital public spaces. We ended up with 14 Signals that superusers from around the world, along with social media experts, agreed were important wherever people gather online. We found that, foundationally, “the first responsibility of a digital space is to create an experience where people feel, at a bare minimum, welcome and safe.”
Here’s Professor Casey Fiesler of the University of Colorado Boulder, whom we interviewed for our Signal called “ensure people’s safety”:
Frequent encounters with triggering content, harassment, privacy violations, disinformation, and hate speech (among others) will not only harm people at an individual level, but also communities at a collective level. Any measures that platforms can take to mitigate these harms and ensure people’s safety on those platforms will result in better experiences, help people realize the benefits of online communities, and help to maintain an active userbase. Moreover, behavior online impacts the broader world, and minimizing problems like harassment and racism online also contributes to minimizing such harm more broadly.
When it comes to ensuring safety on social platforms, there are many kinds of jobs and skill sets involved. Sahar Massachi, an ex-Facebook engineer and the co-founder and executive director of the new Integrity Institute, compares this work on platforms to that of a big city’s police department.
Policing social media
There are lots of different kinds of law enforcement officers at work in any given city. There are beat cops at the street level, directly engaging with residents. One level up, there are detectives, investigating and solving crimes. And beyond the local chain of command, there are FBI agents and other federal officers who track complex problems like counterterrorism.
There are different levels at a social media company as well, says Massachi. There are vast armies of content moderators around the world, reviewing flagged material like cops responding to 911 calls. There are also small teams of “FBI-style people” working in threat intelligence, chasing down foreign infiltration and big cases. But for Massachi and the Integrity Institute, the detectives, the people in the middle, are “a category of type of worker that is ignored.” Massachi says:
Ideally what we’d have is a lot more detectives. Here's a ring of people, they're abusing the platform in this way. I've figured it out through patient work over the course of two weeks. And now I'm submitting a report that goes to a product team — here's a loophole you should close.
Policing, Massachi readily admits, is a complicated metaphor. For one, police officers are supposed to be accountable to the public, and are (usually) not employees of largely unregulated private companies. But also, far too many police departments inequitably harm people of color, and they are called to perform tasks that might be better suited to different kinds of public employees. Social workers, with specialized training in mental health (and no lethal weapons), are often better suited to assisting people experiencing homelessness in the midst of a crisis. From urban planners to librarians, there’s a broad range of civil servants who contribute to ensuring safety in a city. And somewhat similarly, the complicated portfolio of ensuring safety on social platforms extends far beyond what looks like policing.
Defining “integrity” work
For the Integrity Institute, the label “integrity worker” applies to a broad swath of employees working on safety. Their site reads, “If you have experience tackling any of these things on behalf of a social network, you’re probably an integrity worker.” They go on to list ethical design, hate speech, disinformation, toxicity, spam, and more than a dozen other areas.
Massachi created the Integrity Institute last year along with another ex-Facebooker, Jeff Allen, to support integrity workers throughout tech. Working at Facebook from 2016 to 2019, including a stint on the lauded civic integrity team, Massachi saw firsthand how demanding these jobs can be:
When we were working at these companies, there would be times which we'd say: Wow, I'm so burnt out. I'm so tired. I'm working like 12-hour days. Tomorrow, or in a week, I'm gonna have to make a big decision or make a presentation to someone who's from a different part of the company and doesn't really get it. And if I had a clear mind, or if I had, you know, a month to prepare the research, I can give a really killer presentation or make the right decision, but I just don't have that. And so I'm going to do a bad job.
Massachi hopes that the Integrity Institute can provide outside help, in the form of research or specialty knowledge, so integrity workers can make more informed decisions or persuade their superiors.
The elephant in the room: growth
Massachi acknowledges that integrity work can be an uphill battle, because efforts to ensure safety on platforms often conflict with the number one company priority: growth and engagement. (Although he notes that there are other practices and systems besides growth that can conflict with integrity work, such as doing special favors for politicians.)
Contrary to what you might think, says Massachi, “there's no growth director bursting through the wall like the Kool-Aid man.” Rather, he says, integrity workers will test a new idea for improving a platform on a smaller segment of the userbase and see how it affects growth. “If there's a statistically significant — or even non-significant — decrease in growth and engagement in this new version, then you're in real trouble,” says Massachi. “And you just probably will not be allowed to ship your change.”
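To make that launch-gating dynamic concrete, here is a minimal sketch of what reviewing such an experiment might look like in code. This is an illustration under assumptions, not any platform’s real tooling: the engagement metric, function names, group sizes, and thresholds are all invented for the example.

```python
# A hypothetical sketch of reviewing an integrity experiment against growth.
# The metric (minutes of engagement per day), group sizes, and threshold are
# invented for illustration; no platform's actual launch criteria are implied.
import numpy as np
from scipy import stats

def review_experiment(control_engagement, treatment_engagement, alpha=0.05):
    """Compare engagement for users with and without the integrity change."""
    control = np.asarray(control_engagement, dtype=float)
    treatment = np.asarray(treatment_engagement, dtype=float)

    # Relative change in average engagement under the new, safer version
    lift = (treatment.mean() - control.mean()) / control.mean()

    # Two-sample Welch's t-test: is the difference statistically significant?
    _, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    if lift < 0 and p_value < alpha:
        verdict = "blocked: significant drop in engagement"
    elif lift < 0:
        verdict = "at risk: engagement down, even if not significant"
    else:
        verdict = "likely shippable"
    return {"lift": lift, "p_value": p_value, "verdict": verdict}

# Made-up example: the safety change costs a fraction of a minute per user per day.
rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=3.0, size=5000)    # control group
treatment = rng.normal(loc=9.8, scale=3.0, size=5000)   # group with the change
print(review_experiment(control, treatment))
```

In Massachi’s telling, even the “at risk” case, a dip that isn’t statistically significant, can be enough to stop a change from shipping.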
Of course, prioritizing growth is normal for for-profit companies. But when safety, and every other concern, comes at the expense of growth, that becomes a serious obstacle for integrity. And not just integrity in the tech-job sense of “trust and safety” that I’ve been writing about here, but in the ordinary, dictionary sense: “the quality of being honest and having strong moral principles; moral uprightness.” How can we possibly trust platforms to make complex determinations about what content is acceptable and unacceptable, and to set up systems that prioritize the safety of users?
Finding consensus
For Massachi, the answer is to take some of those decisions out of the hands of the platforms. Massachi says that groups with legitimate democratic power, from the European Union on down, will have to make macro decisions about what belongs on social media. Depending on your politics, he says, advocates, elected officials, and/or regulators should decide on what’s appropriate.
As we’ve noted many times here, the social internet is young, and some rules and norms haven’t been established yet. But throughout history, we’ve seen civic-minded traditions and institutions evolve around new technologies, over and over again. Massachi says that once there’s a legitimate consensus, the members of the Integrity Institute can help with implementing design features to carry out those decisions, like creating fake-account detection systems and classifying misinformation.
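As a flavor of what that implementation work can involve, here is a deliberately simplified, rule-based fake-account score. Real detection systems rely on far richer signals and machine-learned models; every feature, weight, and threshold below is invented purely for illustration.

```python
# A deliberately simplified, hypothetical fake-account scoring heuristic.
# Real platforms combine many more signals with machine-learned models;
# these features, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int            # days since the account was created
    requests_sent: int       # friend/follow requests sent
    requests_accepted: int   # of those, how many were accepted
    posts_per_day: float     # average posting rate
    has_profile_photo: bool

def fake_account_score(account: Account) -> float:
    """Return a score from 0 to 1; higher means more suspicious."""
    score = 0.0
    accept_rate = account.requests_accepted / max(account.requests_sent, 1)
    if account.age_days < 7 and account.requests_sent > 100:
        score += 0.4   # brand-new account mass-adding strangers
    if account.requests_sent > 50 and accept_rate < 0.1:
        score += 0.3   # requests are overwhelmingly ignored or rejected
    if account.posts_per_day > 50:
        score += 0.2   # an inhumanly high posting rate
    if not account.has_profile_photo:
        score += 0.1
    return min(score, 1.0)

suspect = Account(age_days=2, requests_sent=300, requests_accepted=12,
                  posts_per_day=80, has_profile_photo=False)
print(fake_account_score(suspect))  # 1.0, i.e. flag for human review
```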
A digital civil service
The Integrity Institute is just getting started. For now, their focus is mainly on crystallizing what it means to work in integrity, attracting new members, and putting them in conversation. For New_ Public, their mission is really exciting: here, seemingly, is a whole institution focused on our Signal to “ensure people’s safety.”
That’s a great start, but I think we might need a whole civil service of professionals dedicated to all of the aspects of flourishing digital public spaces. I’d love to see an independent nonprofit or publicly funded organization focused on supporting workers who “cultivate belonging,” “build civic competence,” or work on any of the other Signals. (And if you’re working on this, let’s get in touch!)
There are many people working in those areas at tech companies, of course, but these efforts often lack transparency, accountability, and legitimacy. These are companies enforcing their own rules in their own company towns. But by professionalizing workers independent of their employers, and by creating new digital public spaces with more than just growth in mind, we have a chance to build some truly amazing places online. We need an array of institutions and professionals, not just digital cops. We need people who care about the users of current platforms, and the ones yet to be built.
Occasionally, we’ll see an article or website that fits perfectly with something we’ve written about previously in the newsletter. In this feature, we will revisit a newsletter essay and offer some additional thoughts.
Misinformation or polarization?
The setup: One commonly held critique of social media is that mis- and disinformation are increasing, and Big Tech is profiting off their spread. Last year, Surgeon General Vivek Murthy thought the Covid situation was so bad that he put out a historic advisory on public health misinformation. Introducing their report on disinformation, the Aspen Institute said “America is in a crisis of trust and truth.”
The newsletter: The Surgeon General’s Office followed up the advisory with “A Community Toolkit for Addressing Health Misinformation,” and I wrote about it for the newsletter. Kyla Fullenwider, Senior Advisor for the Surgeon General, told me that they created the Toolkit in part because they were seeing an intense need to combat misinformation in communities.
The TWIST: In a recent newsletter of his, Matthew Yglesias took issue with the premise that misinformation is rising because of social media. It’s worth emphasizing that Yglesias doesn’t dispute the existence of conspiracy theories and nonfactual information on the internet, but rather challenges the idea that these phenomena are correctly identified as “misinformation.” If anything, he says, “people seem mostly better informed.” Measures of civic awareness, conspiracy thinking, and widespread misperceptions all appear to be relatively flat and unchanging. Rather, what we’re seeing is the amplification of a very politically polarized country. Yglesias writes:
I tend to think that a lot of what is going on is that people see the internet increasing polarization — more people are fighting about politics and saying things they think are really dumb — and confusing that with people being misinformed.
The crux: Yglesias hasn’t quite convinced me. Even if we can’t empirically prove that social media is driving an increase in falsehoods and propaganda, there is still a lot of that material on platforms; this week, we saw plenty related to the conflict in Ukraine. I think it’s appropriate to expect platforms to do everything in their power to limit the reach of false, misleading, hazardous, and illegal content. However, Yglesias does reaffirm my belief that there should be a higher bar for using the term "misinformation," especially when phrases like "viral propaganda and lies" are not only more comprehensible but often more specific to what we're talking about. We also shouldn’t forget that in many cases, platforms are incentivized to spread falsehoods, and they could be doing a lot more to prevent that from happening.
Book club reminder
Our book club is fast approaching. You still have time to buy and read the book: Mr. Penumbra's 24-Hour Bookstore. Our March 8th Open Thread will be about the book, and we’ll be joined by author Robin Sloan for the first half hour. Then, the next newsletter will be my interview with Robin.
Feeling integral,
Josh
Illustration by Josh Kramer. Photo of police siren by Scott Rodgerson on Unsplash.
New_ Public is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.