Trust an algorithm, or trust your neighbor?
What happens to democracy when algorithms run our lives
In 2020, Xinyuan Wang, a PhD student at University College London, conducted a 16-month ethnography of Chinese residents to gather their perceptions of the country’s social credit system. A typical response she captured: “I can’t wait for the implementation of the social credit system, there will be less fraud for sure.”
Westerners might dismiss this sentiment as feigned enthusiasm meant to avoid trouble with state censors. But in her analysis, Wang found that many Chinese citizens saw their nation’s proposed social credit system—currently a fragmented set of policies that aim to enforce regulations and incentivize moral behavior—as a way to alleviate a lack of social trust.
The connection of social credit to trust raises interesting questions about our own systems of algorithmic surveillance in the United States. The truth is that we have our own trust problem. According to the World Values Survey, in the early eighties, 44 percent of Americans thought their fellow citizens could be trusted most of the time, but that percentage fell to the mid-thirties by 2014. This lack of trust is particularly pronounced among partisans: according to Pew, in 1994, 21 percent of Republicans and 17 percent of Democrats had “very unfavorable” views of the other party. By 2016, those numbers had nearly tripled.
Mistrust has made us vulnerable to our own version of algorithmic control. Algorithmic surveillance aims to ensure that citizens behave in a manner consistent with a society’s core values—as interpreted by the state, in China’s case. We can think of ours as a private credit system, one that regulates behavior algorithmically for private gain rather than state control. Our version is pernicious in different ways because it dovetails with our individualistic notions of personal agency: we choose which Netflix shows to watch, which political Facebook posts to like, which Spotify songs to play, which Amazon items to purchase. When companies use algorithms to recommend things to us, those recommendations are presented as extensions of what we already consume. It is as if algorithms are “giving us back to us.” But this comes at a cost.
It’s not just that we are being mined for information. I believe this algorithmic catering to our private desires only serves to make us trust each other less. We live in a self-perpetuating social cycle: the more time we spend as isolated individuals consuming algorithmically curated culture, the more we trust algorithms over each other. If what we watch, who we listen to, and how we speak become structured by processes of data extraction, we may become a people unable to come together across differences to solve collective problems. We may begin to see politics as an exercise in personal wish fulfillment rather than as a site for collective action.
In my class on Algorithms, Data and Society, I ask my students to reflect on why they object to China’s social credit system. “Is your objection to the fact of surveillance, or to the fact that a state is doing the surveilling?”
I remind my students that even they may be in their seats because of algorithms. Increasingly, universities hire vendors like Salesforce or Kira Talent to aggressively identify and market to prospective students. These vendors run a complex web of formulas on a wide range of personal data to determine when and how students should be contacted, whether they should be admitted, how much aid should be offered, and what other strategies would get them to enroll. They even monitor student behavior to make sure students stay on track for graduation. Some universities have gone so far as to track athletes’ geolocation data to confirm they are attending class.
These algorithmic relationships are not two-way exchanges. Increasingly, algorithms have become a tool of the powerful to reduce the costs associated with contract enforcement. In a 2015 paper, Shoshana Zuboff cites Google Chief Economist Hal Varian describing how driverless car neural networks, geolocation data, and consumer databases can work together to empty contracts of uncertainty:
If someone stops making monthly car payments, the lender can ‘instruct the vehicular monitoring system not to allow the car to be started and to signal the location where it can be picked up.’
It’s not unlike how local governments increasingly turn to policing and algorithmic surveillance, rather than community engagement, to address crime. And while most of the algorithmic encounters we have in this society do not leave us locked out of our cars, Zuboff’s example points to a possible dystopian future. If massive amounts of data and computational power are being employed to guide or constrain our decisions, then how much can we trust the authenticity of our choices, and how can we trust each other?
Sociologist Robert Putnam’s work in the early 2000s highlighted the damaging social and political effects of mistrust. Communities with low levels of trust engage in what he calls “hunkering,” or withdrawing from social life, which, he argued, undermines their ability to act collectively on the problems they face. But mistrust can have even more dire consequences. In The Origins of Totalitarianism, philosopher Hannah Arendt saw social isolation from one’s community as a precondition for accepting totalitarian views.
To regain trust, we must regain control of the algorithms that govern our lives. So how do we use algorithms in ways that preserve individual autonomy and freedom? To address this dilemma, we need to understand algorithms as sites of power and find ways to restore the balance. We should assert our digital right not to have our data used unfairly to manipulate or control us—and we must do so collectively.
One such model might be found in the United Kingdom, where a group of Uber drivers have fought to redress the imbalance of power with the ride-hail platform by creating an organization called the App Drivers and Couriers Union and demanding access to their driver data under the EU’s new data privacy laws.
As the union’s cofounder James Farrar explained in an interview with New_ Public editor Wilfred Chan for Dissent Magazine, this data would empower workers to know:
How much did I earn? How was my time utilized on the platform? What was the quality and quantity of work I was offered? And if I was fired, why was I fired?
Crucially, Farrar’s group has also sought access to information about the algorithms that control drivers, such as how they nudge drivers toward decisions that serve Uber’s interests. The union members hope to use this data to inform legal action and labor negotiations.
How can we scale this kind of collective action? Getting citizens to play an active role in the institutions that govern them has never been easy, but we have decades of research into best practices for encouraging participatory governance. Citizen oversight boards for police departments, Chicago’s local school councils, and platform cooperatives offer models for how we could empower communities to ensure the responsible use of algorithms. And to avoid “digital nimbyism,” where a minority of loud and well-resourced people serve on these boards to protect narrow interests, an algorithm could randomly select community members to participate.
Getting private organizations to reveal their algorithms—which many firms regard as proprietary information—might be harder without legislation. As we consider crafting comprehensive data rights policies in the United States, it’s worth noting that a number of scholars have proposed ways for companies to retain their intellectual property while providing users with knowledge of algorithmic processes. Ethan Zuckerman’s idea of algorithmic audits is a promising approach to balancing the needs of companies and workers.
Our goal should be to make algorithms into tools that help us expand—not restrict—the ways we can live our lives. In her book The Politics of Possibility, political geographer Louise Amoore argues for designing algorithms as tools that present decision-makers and community stakeholders with multiple possibilities for identifying and addressing social problems, rather than treating algorithms as solvers of problems. This means agency stays with people, not machines.
I imagine few people in the United States want a Chinese-style social credit system that enforces the state’s ideas of morality. But neither should we accept a private version that steers us away from exercising our power. If corporations use proprietary data and algorithms to optimize for problems of their choosing, workers and communities can use the same data and algorithms to optimize for goals of their own. But this can’t be done individually. Whether through labor unions, data trusts, citizen councils, or platform cooperatives, only collective action can counter algorithmic control—and teach us to trust one another again. 🌳
José Marichal is a professor of political science at California Lutheran University. He studies the role that social media plays in restructuring political behavior and institutions. Currently, he is working on a book that looks at how algorithms impact political identity.
Illustration by Josh Kramer.