#️⃣ The terrorism database big platforms rely on
GIFCT and the collective effort to delete terror from social media
📢 Terrorism online is a systemic problem that deserves systemic solutions
🤝 The big platforms are already working together more than you think
⏫ Online counter-terrorism efforts have ramped up dramatically since 2019
Last weekend, once again, we learned about horrific, white supremacist violence in the United States. I don’t have a lot to add about the senseless, racist murder of ten people in Buffalo. And I don’t want to spend this week lingering on the argument about how this man’s behavior was shaped by the internet. It’s a worthy subject, but a complicated one, and perhaps better suited to the scientists who study this topic.
For now, I want to talk a bit about what actually happens when terrorism is shared online. Extremist content can seem like a many-headed hydra. Snip off one head, as Twitch did this weekend when it stopped the suspect’s livestream within two minutes, and two others pop up in its place — or several million. Social platforms are not closed systems.
It’s become clear that the platforms not only have a responsibility to keep terrorism off of their own networks, but also to work collectively to prevent violent incidents and remove related content. The press, appropriately, has been sharply critical: Big Tech could do more; not enough has changed since Christchurch, New Zealand, in 2019.
But Big Tech has been doing something: mainly creating and growing the Global Internet Forum to Counter Terrorism. Last weekend, GIFCT members quickly identified about 870 visually distinct images and videos related to the shooting, and have been collectively removing copies of them throughout the social internet.
Let’s dive into GIFCT: what it is, how it works, who’s involved, and what its limitations are.
The industry gets a conscience
GIFCT began in 2017 as a consortium of Microsoft (including LinkedIn), YouTube, Twitter, and Facebook (including Instagram and WhatsApp). Collectively, they created a database of terrorist and extremist images, based on the UN Security Council’s “Consolidated List” of terrorist individuals and entities. Then they applied perceptual hashing technology to those images, assigning each a unique digital signature that can be quickly and easily identified across platforms. Similar tech is also used to keep copyright violations and child sexual abuse material off of social networks.
Hashing itself goes back to the early days of computer science, but modern hashing algorithms for photos and video have their roots in PhotoDNA, a collaboration between Microsoft and Dartmouth College in 2009. Here’s a little more about how it works from GIFCT:
Digital signatures for an image or video, perceptual hashes are numerical representations of original content and cannot be reverse-engineered to recreate an image or video. To create a hash, a company converts images to black and white and resizes them so that they are identically formatted, then a mathematical procedure known as Discrete Cosine Transform is used to make a digital signature for the image – our hash. Hashes allow GIFCT members to quickly identify visually similar content which has been removed by one member, enabling it to be re-reviewed by other members to see if the content breaches their terms and conditions. All without sharing any user data between companies.
Two years into the project, the Christchurch shooting was a significant moment of reflection and reassessment for GIFCT and other online counter-terrorism efforts. After the Christchurch Call campaign, the original four members announced at the UN that GIFCT would be reformed into an independent non-profit.
A post-Christchurch counter-terrorism project
Since 2019 GIFCT has made a few significant changes:
It has grown significantly, to 18 member organizations. Many of the most popular social platforms in the US are now represented, except Snap and TikTok, which are currently participating in working groups and may yet join. (page 22)
There’s now a three-tiered threat coordination system, with the aim of speeding up removal and streamlining communication. The tiers range from a more routine incident requiring the removal of terrorist-related content to the highest level, “Content Incident Protocol,” a live-streamed major event on the scale of Buffalo or Christchurch. (There have been only two others since 2019: Halle, Germany, and Glendale, Arizona.)
It has expanded its focus beyond the limited UN list of terrorists, and now consults with experts to update its taxonomy, notably to cover the growing menace of right-wing extremism.
It has heeded the Christchurch Call’s demand for transparency, and has detailed its collective efforts beyond what many of its individual members have disclosed publicly, including releasing its hashing algorithm.
GIFCT members have coordinated on more than 195 incidents since the new threat tiers were put in place in April 2019.
A lot of room for improvement
So we’re all set, right? Well, not so fast. Obviously, GIFCT is not perfect.
Structurally, GIFCT is funded by its members (page 37) and governed by employees of its founding members. Even with its new independent status and impressive leadership, it’s unclear to me that GIFCT has the power and incentive to challenge its members and demand changes when needed. Having an independent, outside group coordinating terrorism response makes sense to me, but it also offers a convenient excuse to members, who can point to their membership in GIFCT as meeting their responsibility for safety on their platform.
On the tech side, even if hashing works exactly as intended, it may simply be overwhelmed by the millions of duplicate posts going up every minute. And there are a few big holes in the design: the hashing system merely informs member platforms that offending content is present; there’s no enforcement mechanism or standard for speedy removal.
Also, not every website, forum, service, and platform is going to join GIFCT. Even if TikTok joins, video hosting site Streamable appears to have played a significant role in keeping the Buffalo shooter’s livestream available. In the last few days, reporters have shared story after story about the livestream and manifesto bouncing from one site to another.
GIFCT also needs faster responses and clearer communication, both between platforms and to the public. Its statements are too opaque, and one result is that too few people know the organization exists. Take some of that membership money and hire more social media people!
If you’re in the mood to be generous, you could say, “that’s a lot of progress for two years, and it seems like GIFCT is just getting started.” After all, the membership is learning, growing, and doing tabletop exercises at The Hague. Maybe the organization just needs a bit more time to build collective power and prove its competence. Success is also hard to see in this area: it’s harder to brag about people not seeing extremist content. And GIFCT is making progress technologically, adding third-party review to its hash database and expanding the criteria for hashed documents to include PDFs, such as manifestos.
Alternatively, you might read the above and say, “It’s too late. GIFCT got a late start, and now the problem is out of control and growing exponentially faster than any solution.” Terrorist content begets more terrorism, and there’s no way to remove all of it, permanently. It’s worth stopping to consider: should more of the emphasis be on prevention? The Buffalo shooter reportedly used Discord, a GIFCT member, as a personal repository and planning journal until the moment right before the attack.
Criticism is healthy, and I hope GIFCT and its members are pushed to keep investing in counter terrorism and redouble their efforts. There’s so much work to do. But I’m also glad to know that GIFCT exists, and that it’s making progress. The effectiveness of the Christchurch Call shows that outside activism can have a significant impact in pushing for reform. More than ever, we need an organization with as much power to remove terrorist content as the terrorists have to push their hatred and violence online.
Community Cork Board
Panel: Aside from the content of this week’s newsletter, talking about social media can be a lot of fun. Join Joi Rae, New_ Public Head of Operations and Partnerships, on May 26 at Digital Void Festival: Brooklyn. Our readers can use promo code BKLYN at checkout to take $5 off (tickets are $15).
Jobs: We’re currently seeking great, smart folks to come join our team in the following roles: Chief of Staff, Researcher, Chief Operations Officer, Head of Product, and Head of Community. Also, we have internships in communications, design, operations, and research. See details and apply here.
Emailing Joe: Just a reminder that you can still send us messages for Joe from last week’s Corkboard if you didn’t get a chance. He’d love to hear from you!
Magazine! Very soon we’re launching the new Trust issue of New_ Public Magazine, right here in the newsletter. The next newsletter will come out a couple days later than usual, on Tuesday, May 31, when those of us in the US are back from the long weekend. Stay tuned!
New_ Public is a partnership between the Center for Media Engagement at the University of Texas, Austin, and the National Conference on Citizenship, and was incubated by New America.