Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated Decision-Making Systems.” She talked about a case she was involved in concerning DNA-matching software used in criminal cases, where they were able to get access to the actual code and show that, under certain circumstances, the software would generate false positives (matching a person's DNA to a crime-scene sample when it should not have).
As the title of her talk suggests, she used this concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular, she suggested:
- Companies should be encouraged, or required by regulation, to reinvest some of the profit they make from AI-driven efficiencies into improving the AI itself.
- A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who probe the AI, backed by a mechanism of redress. She pointed out that humans in the loop can get lazy, can be incentivized to agree with the AI, and so on. (A sketch of what such independent testing might look like follows this list.)
- We need regulation! No other approach will motivate companies to improve their AIs.
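To make the independent-tester idea a bit more concrete, here is a minimal sketch in Python of what a black-box audit for false positives might look like. Everything in it is hypothetical: the simplified profile format, the `naive_match` stand-in, and its threshold are invented for illustration, and real probabilistic-genotyping software is far more complex. The point is only the shape of the audit: feed the system pairs of profiles that are non-matches by construction and measure how often it declares a match anyway.

```python
import random

random.seed(42)  # reproducible audit runs

LOCI = 20  # loci per simplified synthetic profile (hypothetical format)


def random_profile():
    """Generate a synthetic profile: one pair of distinct alleles per locus."""
    return [tuple(random.sample(range(1, 11), 2)) for _ in range(LOCI)]


def naive_match(evidence, suspect, threshold=0.5):
    """Hypothetical stand-in for the matcher under audit: declares a match
    when at least `threshold` of the loci share an allele."""
    shared = sum(1 for e, s in zip(evidence, suspect) if set(e) & set(s))
    return shared / LOCI >= threshold


def audit_false_positives(trials=10_000):
    """Black-box audit: run pairs of independently generated profiles
    (true non-matches by construction) and count reported matches."""
    hits = sum(naive_match(random_profile(), random_profile())
               for _ in range(trials))
    return hits / trials


if __name__ == "__main__":
    print(f"False-positive rate on known non-matches: {audit_false_positives():.2%}")
```

An independent testing body could run this kind of harness against the real software, publish the measured rates, and tie them to the mechanism of redress she described.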
We had an interesting conversation around how one could test her second point. Can we come up with an experiment that tells us which approach, independent testers or humans in the loop, is actually better?
She shared a link to a collection of most of the relevant papers and information: Northwestern Panel, March 10, 2022.