🤬 Can Machine Learning Break the Cycle of Violence?
Desmond Patton of SAFElab talks about the phenomenon of physical violence landing online.
More and more, we have to consider how violence in the United States is shaped by tech policy and design. This week we speak with Desmond Patton from Columbia University’s SAFElab, a research project that examines violence in the lives of youth of color and how it plays out on social media. Patton’s lab uses social media analysis to de-escalate threatening messages online and has created algorithms to detect harmful language in posts before it sparks violence in real life.
He says, “These [tech] companies have trust and safety spaces, but I'm not feeling the trust and safety yet.” Since 2012, SAFElab has worked with specialists across computer science, social work, psychology, and sociology to probe how physical gang violence translates into online intimidation, and the damage that follows. Recent projects include developing methods for using social media as a data source for studying gun violence, and applying machine learning and computer vision to automatically identify psychosocial concepts in social media data, such as loss, grief, aggression, and substance use.
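To make the machine-learning piece concrete, here is a minimal sketch of what classifying posts into psychosocial codes could look like, assuming a simple bag-of-words text classifier. The example posts, labels, and model choice are invented for illustration; this is not SAFElab's actual data or pipeline.

```python
# Hypothetical sketch only: a minimal text classifier that assigns
# psychosocial codes (aggression, loss, substance use, other) to posts.
# Example posts, labels, and model are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder training set; in practice, posts are hand-labeled
# by trained annotators with community input.
posts = [
    "rest in peace lil bro, miss you every day",   # loss
    "keep talking and see what happens",           # aggression
    "rolling up again tonight",                    # substance use
    "so proud of my sister graduating today",      # other
]
labels = ["loss", "aggression", "substance_use", "other"]

# Bag-of-words TF-IDF features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Predict a code for a new, unseen post.
print(model.predict(["we lost another one this weekend"]))
```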
“We care about the safety and wellbeing of young Black and brown people. We also know that the tools that we create can be used against them. So what is the middle ground? That's the theoretical work that we've been doing,” says Patton.
The lab has grappled with the ethical dilemma of leveraging the social media of Black and brown youth. They don’t partner with police in any way and instead work with community members, non-profit organizations, and young people. They work hard to protect their data so as not to criminalize communities of color or pass information to law enforcement.
They have also analyzed and documented their own internal biases in how they label data and studied how the machine assigns labels automatically. Their goal is to develop an algorithmic system that can identify digital signals of grief and could be used universally. Patton says he receives daily Google Alerts about the same kinds of behavior he studies online, from Chicago, London, Germany, South Africa, and India. The implications of his grief work are society-wide, but they also bear on combating violence in ways that could change communities and how they care for their people.
He says, “There's so many other behaviors and dynamics that are happening on social media within Black and brown youth that aren’t about aggression, loss, and substance use. They're living their best lives or have joy and love. Those things are important too.”
Q&A
New_ Public: We're expecting people to change and culture to change, but we haven’t learned how to acknowledge our inherent biases. Why is that?
Desmond Patton: I agree. There is this uncomfortability with talking about biases as if we don’t all have biases. It's a part of how we grew up, where we grew up. If we work on identifying them ourselves, we can see how they have been used or misused in all types of spaces. For SAFElab, it's about the labeling and interpretation of social media content where bias readily shows up, immediately. Some of the biggest issues are: What is a threat online? What is grief online? And who gets to determine and define what those concepts are? We've had to construct a methodology that forced us to wrestle with that bias, because biases go into the labeling process. The labeling process then feeds types of content that the machine learning algorithm is going to automatically identify.
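One way to surface that kind of labeling bias before it reaches a model is to check how consistently different annotators apply the same codes. The sketch below uses Cohen's kappa for that check; the annotator labels are invented, and this is only one illustrative auditing step, not SAFElab's methodology.

```python
# Hypothetical sketch: measure agreement between two annotators who
# labeled the same posts, before those labels train any model.
# Low agreement on a code like "aggression" is one signal that
# interpretation bias is creeping into the labeling process.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Invented labels from two annotators for the same ten posts.
annotator_a = ["aggression", "loss", "other", "aggression", "other",
               "loss", "aggression", "other", "other", "loss"]
annotator_b = ["other", "loss", "other", "aggression", "aggression",
               "loss", "other", "other", "other", "loss"]

# Cohen's kappa corrects raw agreement for chance; values near zero
# mean the annotators agree barely more than random labeling would.
print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")

# The confusion matrix shows where disagreement concentrates, e.g.
# posts one annotator reads as aggression and the other as "other".
codes = ["aggression", "loss", "other"]
print(confusion_matrix(annotator_a, annotator_b, labels=codes))
```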
Your multimodal social media analysis for gang violence prevention includes the psychosocial codes of aggression, loss, and substance abuse. What was the decision-making process behind emphasizing these three terms?
Before partnering with computer scientists, I'd done qualitative work in communities in Chicago for years. The themes that kept coming up between social media and the offline world were issues of substance use, grief, and loss. Those psychosocial codes were informed by prior qualitative work in which young people consistently mentioned these three themes.
One of the hardest things about being a social worker and partnering with computer scientists is this need for binary classifications. There are lots of other behaviors and dynamics that are unfolding on social media, but we can only study them within tightly-wound codes.
‘Neighborhood,’ ‘love,’ ‘joy,’ and ‘relationships’ all got coded, but they were coded ‘other’ because they weren't among the predominant codes. There's so many other behaviors and dynamics that are happening on social media within Black and brown youth that aren’t about aggression, loss, and substance use. They're living their best lives or have joy and love. Those things are important too. We were trying to move in a direction where we're having a more holistic conversation about the lives of young people. It is a universal thing for academics to focus on risk, and that's a trope that I also fell into, but there's an opportunity to think bigger and broader in this space.
You have developed software that analyzes patterns of gang activity online, and you're looking for ways to make that tool fair. Where do you start?
I'm so concerned about how a tool could easily be misused against community members that I don't want that to be a part of our process. The start and the end is community engagement.
We should be in partnership with communities to think about these problems. I would like for community members to say, ‘Can you help us? Can we partner together on developments to match the needs of my community? This is how it should look. This is how it should feel. These are the features that are important.’
This year has seen anger, rage, and violence escalate from online spaces into the physical world. What is your stance on that? What are some solutions to address escalation?
I can train people to look for signs of aggression or use funny memes or comedy to de-escalate, but if you still have the features that allow for content to stay online that is harmful and hateful, then it’s not going to matter. We have to have synergy between tech companies, policymakers, and community members around how we can triangulate these efforts and work together. Users need more control. They need to have the agency to take control over what's happening.
The companies have to make decisions on how to handle bullies and how to handle harmful speech. If your bottom line is to keep content that harms people, then that's exactly what it's going to do. These companies have trust and safety spaces, but I'm not feeling the trust and safety yet. Part of the problem is they hire attorneys, but I'd love for them to think about education, think about de-escalation, think about inclusivity and bring in more diverse folks to work in those spaces.
What is big tech's responsibility in this space?
They’ve got to put their money where their mouth is. I'm not seeing big, long-term investments in this space. The trust and safety units at Facebook or Twitter, do they have any power within the company? They may have group-based decisions on what stays and what goes, but I'm not sure that it trickles up in a way that becomes cultural and states, ‘You don't do this on our platform.’
George Floyd's killing was captured on video by a teenager and shared online through social platforms. How do you unpack this dynamic with your high school students? Is social media a tool? A weapon? A form of expression?
Social media is a tool. Our responsibility as adults and educators is to help get people to think about the diversity and variation of that tool. If you use a video camera to document fights at school, is it the same as this young woman who documented the murder of George Floyd? No. There are different ways of using this tool. I think you have to engage young people in a conversation around those differences. And now that we sadly have these examples, you can engage in hard and difficult conversations about best uses. That's really important as we move forward, because one of them could be considered advocacy and the other one could be considered exposing.
We have to be able to have those conversations. It’s not built into our education system. It's not a part of learning about digital citizenship. There's so much we need to do to create a better digital world.
Do we need social media intervention? Or mediation?
There are some situations where an intervention is called for. If you see an escalating conversation online where there's one post and two candid comments, and then the next week, someone goes negative, should there be an intervention?
There needs to be education around how you engage social media, how you connect with people on social media, and allowing people to have a more informed decision about how they want to be online.
I see a lot of our young people kind of stuck because social media is so popular. If you're not online, it is terrible. It is a detriment to your social status. Young people are having to make really hard decisions. Young people traditionally don't make very good decisions about safety, because their brain is not fully ready to make those types of decisions.
Learn more about Patton’s work with SAFElab here.
What’s Clicking:
🌐 Online:
Amazon and others are indefinitely suspending police use of face recognition products, weighing the technology’s role in false arrests and the bans already put in place by almost two dozen cities and seven states across the US. Proposed legislation could make those bans broader or more permanent. (MIT Technology Review)
A Black man from South Africa was shot and killed in an encounter with the police in his adopted home, Hawaii. Now South Africans are demanding justice for his death. (The New York Times)
What social media posts can tell us about gang violence. Can artificial intelligence help prevent gang violence from happening in the first place? (WTTW)
Mob violence against Palestinians in Israel is fueled by groups on WhatsApp. (The New York Times)
Detecting and reducing bias in a high-stakes domain. In high-stakes scenarios, accuracy alone cannot guarantee a good system. (ACL Anthology)
🏙 Offline:
You are a network. An emerging theory of selfhood. (Aeon)
A rabbi, a Tlingit tribal leader, an entrepreneur, a sociologist, and others walked into a virtual room… (The Ideaspace)
The social graph of society is civic infrastructure, but too few people really understand how this needs to be nurtured and maintained. (danah boyd)
Do you have a tip or story idea that you would like to share with New_ Public? We truly value your feedback. Please email us at hello@newpublic.org.
Looking to live our best lives online,
The New_ Public team
Illustration by Josh Kramer

Civic Signals is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.