In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver, I came across a call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review: In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been arguing: that we need to move beyond guidelines, lists of principles, and checklists. Hao nicely surveys some of the initiatives under way to hold AI accountable and what should happen next. Read on if you want to see what I think we need.

Here is what I think needs to be done:

  • We need more sustained criticism of AI applications based on informed testing, along the lines of what POLITICO does.
  • The tech field is developing ways of testing for and mitigating bias. We need more of this, and it needs to become standard practice. See, for example, the Google What-If Tool.
  • That should not blind us to deeper socio-cultural problems with AI. Ethical AI is not just a matter of making individual tools fair. We need deep thinking about AI that goes beyond the biases of individual tools, along the lines of Kate Crawford’s anatomyof.ai.
  • We need philosophers and sociologists to question the sudden fad of guidelines, checklists, and principles. Is this a healthy way to orient ourselves to AI?
  • Given how important data is to machine learning, we need to recognize the importance of the proper and ethical stewardship of data. The humanities and social sciences, especially library and information studies, have a vital role to play in developing a culture of appropriate care for data. We should be designing data studies programmes that bring together those interested in the data and those interested in the algorithms.
  • We need best practices and frameworks for implementing the ethical development and deployment of AI. Any organization that deploys AI should be expected to implement an ethics framework that operationalizes the principles it claims to abide by.
  • One place to start would be a framework for ethical AI research. We academics who work at the intersection of computing and the humanities have a responsibility to imagine how development should be done. It isn’t good enough to leave it to our ethics boards to figure out, if and when they are asked.
  • And yes, we need regulation that is backed up by robust enforcement. Voluntary industry codes of ethics will not be enough.
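To make the point about bias testing above concrete: tools like the What-If Tool present this kind of check through a visual interface, but the underlying idea can be sketched in a few lines. Below is a minimal, hypothetical illustration of one common fairness metric, the demographic parity gap (the difference in positive-outcome rates between groups). The function names and the toy data are mine, for illustration only, not taken from any particular tool.

```python
# A minimal sketch of one standard bias check: demographic parity.
# Hypothetical names and toy data, for illustration only.

def selection_rates(outcomes):
    """Positive-outcome rate for each group.

    outcomes: list of (group, decision) pairs, where decision is 0 or 1.
    """
    totals, positives = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy loan-approval decisions: (group, approved?)
decisions = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
             ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

print("selection rates:", selection_rates(decisions))
print("demographic parity gap:", demographic_parity_gap(decisions))
```

A check like this is trivial to write; the harder institutional question, and the one raised above, is making such audits a routine, expected part of deployment rather than an optional extra.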