4 Comments
Jun 14, 2022

@ Caroline Sinders -- Your article was very insightful, and as a result of reading it I plugged the Perspective API into my forums to see how it fared. Without getting into the weeds, it analyzed 5800 comments over the past three days, and it didn't fare so well. Lots of false positives on innocuous content...so many that it is not a viable moderating tool for my purposes, as far as I can tell.
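
(For anyone curious, here is roughly how I wired it up. A minimal sketch using Perspective's Comment Analyzer REST endpoint; the API key, the 0.8 flagging threshold, and the sample comments are placeholders of mine, not anything from the article:)

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; issued via Google Cloud
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)
FLAG_THRESHOLD = 0.8  # arbitrary cutoff I experimented with; tune per community

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Flag anything above the threshold for human review rather than auto-removal.
for comment in ["Thanks for the thoughtful reply!", "You are an idiot."]:
    score = toxicity_score(comment)
    if score >= FLAG_THRESHOLD:
        print(f"FLAGGED ({score:.2f}): {comment}")
```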

That aside, I wonder whether you believe any organization can fairly implement the transparency you're suggesting is needed. AI algorithms are constantly changing, and the internal workings, and how their outputs are derived, are oftentimes not clear even to the software engineers themselves. You could implement a series of test cases to check for fairness, I suppose, but then you run the risk of software engineers "teaching to the test" rather than achieving real fairness.
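
(To make the test-case idea concrete: one common pattern is a counterfactual check, where you score otherwise-identical sentences that differ only in an identity term and assert the scores stay close. This is only a sketch under my own assumptions; the template sentence, the term list, the tolerance, and the toxicity_score function from the snippet above are all illustrative:)

```python
# Rough "fairness test case": swap identity terms into the same neutral
# template and check that the toxicity scores don't drift apart too far.
# Assumes a toxicity_score(text) -> float function like the sketch above.

TEMPLATE = "I am a {} person and I love this forum."
IDENTITY_TERMS = ["gay", "straight", "Muslim", "Christian", "Black", "white"]
TOLERANCE = 0.15  # arbitrary; the score spread we are willing to accept

def check_identity_term_parity(score_fn) -> bool:
    """Return True if all identity-term variants score within TOLERANCE of each other."""
    scores = {term: score_fn(TEMPLATE.format(term)) for term in IDENTITY_TERMS}
    for term, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{term:>10}: {s:.3f}")
    return max(scores.values()) - min(scores.values()) <= TOLERANCE

# Example: check_identity_term_parity(toxicity_score)
```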

When you add it up, all the things on your wish list for transparency amount to a very large workload to compile, maintain, and keep current in a neutral way, perhaps even more work than creating the algorithm itself. It's a big ask.

I almost wonder if, for very big and influential algorithms, there need to be panels of independent rating agencies (similar to news fact-checkers) that try to grade algorithms on the criteria you laid out in your article.

author

Don't be shy, ask away! Any questions for the Congressman from Silicon Valley or the machine learning expert?

@ Rep. Ro Khanna -- I wonder if you have a solution for how to moderate public online places, hosted by the government, without them immediately being overrun with "outrage" over being biased and censorious. The rules for my (privately owned) forums are fairly strict, and when people get angry at the rules I can tell them, "Sorry, this is a private social club, and we have strict civility rules." Even then, I get a lot of anger from both the Left and the Right about bias and unfairness. Moderating in a way that most people consider neutral is very difficult: text can often be interpreted several ways; messages sometimes need to be moderated for their intent even when the letter of the text would not technically break the rules; people often literally cannot step outside their own worldview enough to see a neutral position; and people can post innuendo and conspiracy-theory posts that are not clearly breaking guidelines but are detrimental to the community. Language is such a slippery thing. How does the government host these kinds of discussions in a way that makes people feel satisfied that it's somewhat neutral and accommodating?
