Geoff McMaster of the Folio (the U of A's news site) wrote a nice article, "Making AI accountable easier said than done, says U of A expert." The article quotes me on accountability and artificial intelligence. What we didn't really talk about are the different forms accountability can take for automata, including:
- Explainability – Can someone get an explanation of how and why an AI made a decision that affects them? If people can get an explanation they can understand, then they can presumably take remedial action and hold someone or some organization accountable. (A short sketch after this list shows one form such an explanation might take.)
- Transparency – Is an automated decision-making process fully transparent so that it can be tested, studied, and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability, since it means opening the whole process to scrutiny rather than just explaining individual decisions.
- Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?
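To make the explainability question a bit more concrete, here is a minimal sketch in Python of one form an explanation could take: tracing the decision path of an interpretable model. The loan-approval scenario, the feature names, and the toy data are all hypothetical, and real AI systems are rarely this legible; this is an illustration of the idea, not a prescription.

```python
# A minimal sketch of one kind of "explanation": walking the decision
# path of an interpretable model. The loan framing, feature names, and
# data below are hypothetical, purely for illustration.
from sklearn.tree import DecisionTreeClassifier

features = ["income", "debt", "years_employed"]  # hypothetical features

# Toy training data: [income in $k, debt in $k, years employed]
X = [[30, 40, 1], [80, 10, 5], [50, 30, 2], [90, 5, 10],
     [25, 50, 0], [70, 20, 8], [40, 35, 3], [85, 15, 6]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 0 = denied, 1 = approved

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Explain a single decision by listing the tests the tree applied.
applicant = [[45, 38, 2]]
path = clf.decision_path(applicant)   # nodes visited for this applicant
leaf = clf.apply(applicant)[0]        # the leaf where the decision lands

print("Decision:", "approved" if clf.predict(applicant)[0] else "denied")
for node in path.indices:
    if node == leaf:
        continue  # leaves carry no test, only the outcome
    f = clf.tree_.feature[node]
    t = clf.tree_.threshold[node]
    went_left = applicant[0][f] <= t
    print(f"  because {features[f]} = {applicant[0][f]} "
          f"{'<=' if went_left else '>'} {t:.1f}")
```

The point of the sketch is that an explanation like "denied because income = 45 <= 60.0" is something a person could understand and contest, which is precisely what opaque models make difficult.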
In all these cases there is a presumption of process, both to establish explainability, transparency, or responsibility, and then to punish or correct problems. Otherwise people will have no real recourse.