2 Comments
Nov 18 · edited Nov 18 · Liked by New_ Public

It is nice (a sorta vindicated feeling) to see that some of the ideas I have theorised and experimented with in the past, in terms of user-driven and user-motivated content moderation, are being indirectly validated by these field experiments. The work I am referring to is "Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization", https://arxiv.org/abs/2206.04007.


I mostly study and work in social VR spaces, and I have long been advocating for more thorough onboarding (meaningful friction) coupled with restorative justice principles to increase the frequency of high-quality interactions. Of course, this would require more resources for moderation teams, and that feels like a pipe dream as it stands. Thanks for the read.
