4 Comments

@ Caroline Sinders -- Your article was very insightful, and as a result of reading it I plugged the Perspective API into my forums to see how it fared. Without getting into the weeds, it analyzed 5800 comments over the past three days, and it didn't fare so well. Lots of false positives on innocuous content...so many that it is not a viable moderating tool for my purposes, as far as I can tell.
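
If anyone wants to reproduce that kind of spot-check, the scoring call is roughly the sketch below (Python). The API key placeholder and the 0.8 flag cutoff are just illustrative assumptions, not my actual setup.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # assumption: a key issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for one comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# 0.8 is an arbitrary illustrative cutoff, not a recommendation.
for comment in ["You're an idiot.", "Thanks, that fixed my issue!"]:
    score = toxicity_score(comment)
    print(f"{score:.2f}  {'FLAG' if score >= 0.8 else 'ok'}  {comment}")
```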

That aside, I wonder if you believe that any organization can really fairly implement the transparency you're suggesting is needed? AI algorithms are constantly changing, and the internal workings and how solutions are derived are oftentimes not even very clear to the software engineers themselves. You could implement a series of test cases to test for fairness, I suppose, but then you run the risk of software engineers "teaching to the test" rather than achieving real fairness.

When you add it up, all the things on your wish list for transparency amount to a very large workload to compile, maintain, and keep up with in a neutral way, perhaps even more work than creating the algorithm itself. It's a big ask.

I almost wonder if, for very big and influential algorithms, there need to be panels of independent rating agencies (similar to news fact checkers) that try to grade algorithms on the criteria you laid out in your article?

Hi Joe! What a great question :) And this is something I struggled with when outlining and thinking through 'well what IS transparency', and the sad realization is, I think, that being meaningfully transparent is a bit of a burden on the creator, and something that a lot of for-profit entities would view as too much work. If it's helpful, I'm coming from the human rights and civil society space, where specific and clear disclosures, methodologies, and rationales are necessary (eg we did this on this date, in this way, at this time, and decided x y z).
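
For concreteness, that kind of record doesn't need to be elaborate; a minimal sketch of one dated decision-log entry is below, in Python just to show the shape, and every field value in it is hypothetical.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DisclosureEntry:
    """One entry in a public decision log: what changed, when, how, and why."""
    decided_on: str   # date of the decision
    action: str       # what was done
    method: str       # how it was done / evaluated
    rationale: str    # why it was decided

# Hypothetical example values, not a real log.
entry = DisclosureEntry(
    decided_on="2021-03-01",
    action="Raised the comment auto-flag threshold",
    method="Manual review of a sample of flagged comments",
    rationale="Too many innocuous comments were being held for moderation",
)
print(json.dumps(asdict(entry), indent=2))
```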

Additionally, with this point of yours: "When you add it up, all the things on your wish list for transparency amount to a very large workload to compile, maintain, and keep up with in a neutral way, perhaps even more work than creating the algorithm itself. It's a big ask."--> I agree, it is a big ask! But I also think for a long time we've existed in the 'move fast and break things' era, and I think it's time to move slowly and work thoughtfully. Thoughtful work takes time, and so does transparency if we want it to be meaningful. I don't think this process can be or should be sped up, and I don't personally think it's an issue that this is more work--I don't think this process can be automated, and if we're interested in the safety of individuals, maybe it is time to move slowly and thoughtfully. Perhaps the process of ensuring safety should take longer than the making of the 'thing' (in this case, the algorithm), but that doesn't negate the reasons to use the algorithm.

tl;dr perhaps slow is better :)

I think testing for fairness is really key, but I'm often inspired by 'threat modeling' and 'designing for the margins', meaning trying to understand and test how different marginalized groups outside of my country would be negatively affected by this. Dana Fried, a software engineer, often uses the phrase 'what would 4chan do', i.e. how would 4chan use my tool negatively, as a good testing barometer.
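
One way to turn that into a repeatable check is identity-term swapping: score sentences that differ only in which group is named and flag large gaps. A rough sketch is below; it could be pointed at the Perspective scoring function from the earlier comment or any in-house classifier, and the template, term list, and 0.15 gap threshold are illustrative assumptions rather than a standard.

```python
# "Designing for the margins" as an automated check: score sentences that differ
# only in which identity group is named, and flag large score gaps.
TEMPLATE = "I am a {} person and I love my community."
IDENTITY_TERMS = ["straight", "gay", "Christian", "Muslim", "white", "Black"]
MAX_ALLOWED_GAP = 0.15  # illustrative tolerance, not a standard

def identity_swap_check(score_fn) -> bool:
    """score_fn maps a sentence to a toxicity score in [0, 1] -- for example,
    the toxicity_score sketch above, or any in-house classifier."""
    scores = {term: score_fn(TEMPLATE.format(term)) for term in IDENTITY_TERMS}
    gap = max(scores.values()) - min(scores.values())
    for term, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:.2f}  {term}")
    print("PASS" if gap <= MAX_ALLOWED_GAP else f"FAIL: gap of {gap:.2f} across terms")
    return gap <= MAX_ALLOWED_GAP
```

Running identity_swap_check(toxicity_score) against the earlier sketch would print the per-term scores and a pass/fail line.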

To this point of yours: "I almost wonder if, for very big and influential algorithms, there almost needs to be panels of independent rating agencies (similar to news fact checkers) that try to grade algorithms on the criteria you laid out in your article?"--> I think that's a good idea and if I'm not mistaken, I do think that's what Parity AI is doing, which is great!

Don't be shy, ask away! Any questions for the Congressman from Silicon Valley or the machine learning expert?

@ Rep. Ro Khanna -- I wonder if you have a solution for how to moderate public online places, hosted by the government, that doesn't immediately get overrun with "outrage" for being biased and censorious? I know that the rules for my (privately owned) forums are fairly strict, and when people get angry at the rules I can tell them "Sorry, this is a private social club, and we have strict civility rules." Even then, I get a lot of anger from both the Left and the Right about bias and unfairness. It is very difficult to moderate in a way that most people consider neutral: text can often be interpreted several ways; messages sometimes need to be moderated for intent even when the letter of the text would not technically break the rules; people often literally cannot step outside their own worldview enough to see a neutral position; and people can post innuendo and conspiracy-theory-type posts that do not clearly break guidelines but are detrimental to the community. Language is such a slippery thing. How does the government host these kinds of discussions in a way that makes people feel satisfied that it's somewhat neutral and accommodating?
