Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of its teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary is a good discussion of the 2016 Australian Robodebt scandal, in which an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service with dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

What I find even more disturbing is a point they make about how the system shifted the responsibility for establishing the existence of the debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills. It was up to the individual to figure out whether they had really been overpaid or whether there had been a miscalculation. Imagine if the police used predictive algorithms to fine people for possible speeding infractions, and those people then had to prove they were innocent or pay the fine.

One can see the attractiveness of such a “fine first, then ask” approach. It reduces government costs by shifting the onerous task of establishing the facts onto the citizen. There is a good chance that many who were incorrectly billed will pay anyway because they are intimidated or don’t have the resources to contest the fine.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
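
Public accounts describe its core as “income averaging”: take the annual income employers reported to the tax office, smooth it evenly across the 26 fortnights of the year, and raise a debt wherever the smoothed figure implies a lower benefit entitlement than was actually paid. Here is a minimal sketch of that logic in Python; the rates and thresholds are hypothetical stand-ins, not the real Centrelink parameters:

    # Illustrative sketch of Robodebt-style "income averaging".
    # BASE_RATE, FREE_AREA and TAPER are made-up numbers, not Centrelink's.
    FORTNIGHTS = 26
    BASE_RATE = 550.0  # full fortnightly benefit
    FREE_AREA = 300.0  # earnings allowed before the benefit tapers
    TAPER = 0.5        # benefit reduction per dollar earned over FREE_AREA

    def entitlement(income):
        """Benefit payable for a fortnight given that fortnight's earnings."""
        return max(0.0, BASE_RATE - TAPER * max(0.0, income - FREE_AREA))

    def averaged_debt(annual_income, reported):
        """Recompute entitlements as if income were spread evenly over the
        year, then bill for the difference -- the flawed core of the scheme."""
        assumed = annual_income / FORTNIGHTS
        paid = sum(entitlement(r) for r in reported)    # what was actually paid
        recomputed = entitlement(assumed) * FORTNIGHTS  # what averaging implies
        return max(0.0, paid - recomputed)

    # Someone who earned $2,000 a fortnight for half the year, then honestly
    # reported zero income while unemployed, gets a spurious "debt".
    reported = [2000.0] * 13 + [0.0] * 13
    print(averaged_debt(sum(reported), reported))  # 1950.0, though nothing is owed

The recipient in this toy example reported every fortnight accurately and was entitled to every payment, yet the averaging logic manufactures a debt because it cannot see that the income was earned in only part of the year.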

Street View Privacy

How do you feel about people being able to look at your house in Google Street View? Popular Science has an article by David Nield, “How to hide your house on every map app: Stop people from peering at your place” (May 18, 2022).

This raises questions about where privacy starts and where the right to look or to know stops. Can I not walk down a street and look at the faces of houses? Why then should I not be able to look at the same facades on Street View and other similar technologies? What about the satellite view? Do people have the right to see into my back yard from above?

This issue is similar to, though less fraught than, that of face databases. What rights do I have to my face? How would those rights connect to laws about Name, Image and Likeness (NIL), or rights of publicity, which recently became an issue in amateur sports in the US? As for Canada, rights of publicity are complex and vary from province to province, but there is generally a recognition that:

  • People should have the right “to control the commercial use of name, image, likeness and other unequivocal aspects of one’s identity (eg, the distinct sound of someone’s voice).” (See Lexology article)
  • At the same time there is recognition that NIL can be used to provide legitimate information to the public.

Returning to the blurring of your house facade in Street View: I’m guessing the main reason the companies provide this is security, for people in sensitive positions or people being stalked.

Health agency tracked Canadians’ trips to liquor store via phones during pandemic

The report reveals PHAC was able to view a detailed snapshot of people’s behaviour, including grocery store visits, gatherings with family and friends, time…

The National Post is reporting on the Public Health Agency of Canada and its use of the mobility data that a group of us wrote about in The Conversation (Canada). The story goes into more detail about how the Health agency tracked Canadians’ trips to liquor store via phones during pandemic. The government provided one of the reports commissioned by PHAC from BlueDot to the House of Commons. The Ethics Committee report discussing what happened and making recommendations is here.

Giant, free index to world’s research papers released online

Catalogue of billions of phrases from 107 million papers could ease computerized searching of the literature.

From Ian I learned about a Giant, free index to world’s research papers released online. The General Index, as it is called, makes n-grams of up to five words available, with pointers to the relevant journal articles.

The massive index is available from the Internet Archive here. This is how it is described:

Public Resource, a registered nonprofit organization based in California, has created a General Index to scientific journals. The General Index consists of a listing of n-grams, from unigrams to five-grams, extracted from 107 million journal articles.

The General Index is non-consumptive, in that the underlying articles are not released, and it is transformative in that the release consists of the extraction of facts that are derived from that underlying corpus. The General Index is available for free download with no restrictions on use. This is an initial release, and the hope is to improve the quality of text extraction, broaden the scope of the underlying corpus, provide more sophisticated metrics associated with terms, and other enhancements.

Access to the full corpus of scholarly journals is an essential facility to the practice of science in our modern world. The General Index is an invaluable utility for researchers who wish to search for articles about plants, chemicals, genes, proteins, materials, geographical locations, and other entities of interest. The General Index allows scholars and students all over the world to perform specialized and customized searches within the scope of their disciplines and research over the full corpus.

Access to knowledge is a human right and the increase and diffusion of knowledge depends on our ability to stand on the shoulders of giants. We applaud the release of the General Index and look forward to the progress of this worthy endeavor.

There must be some neat uses of this. I wonder if someone like Google might make a diachronic viewer similar to their Google Books Ngram Viewer available?
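
To make the structure concrete, here is a toy sketch in Python of the kind of inverted index the release describes: every one- to five-word n-gram mapped to the articles that contain it. The article IDs and texts here are made up for illustration:

    # Toy n-gram index of the kind the General Index describes: each 1- to
    # 5-gram maps to the set of article IDs it occurs in. IDs are made up.
    from collections import defaultdict

    def ngrams(tokens, max_n=5):
        """Yield every 1- to max_n-gram in a token list."""
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                yield " ".join(tokens[i:i + n])

    def build_index(articles):
        """Map each n-gram to the set of article IDs containing it."""
        index = defaultdict(set)
        for article_id, text in articles.items():
            for gram in ngrams(text.lower().split()):
                index[gram].add(article_id)
        return index

    articles = {
        "doi:10.0000/example.1": "gene expression in plant cells",
        "doi:10.0000/example.2": "plant cells respond to heat stress",
    }
    index = build_index(articles)
    print(sorted(index["plant cells"]))  # both articles contain the bigram

The real index over 107 million articles is presumably stored very differently, but the lookup it enables – phrase in, article pointers out – has this shape.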

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems.” She talked about a case she was involved in concerning DNA matching software used in criminal cases, where they were able to actually get the code and show that, under certain circumstances, the software would generate false positives, matching people’s DNA to DNA from a crime scene when it should not have.

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged/regulated to invest some of the profit they make from the efficiencies from AI in improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, together with a mechanism of redress. She pointed out that humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?
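
One crude way to frame such a test: run the same labeled cases through each oversight regime and compare the final error rates. In this toy sketch both the “AI” and the reviewer are stand-ins, not models of real systems; it only illustrates her worry that a reviewer who mostly defers to the machine barely moves the error rate, which is exactly what an independent audit set would reveal:

    # Toy comparison of oversight regimes on the same labeled cases.
    # The "AI" and the reviewer are stand-ins, not models of real systems.
    import random

    random.seed(0)

    def noisy_ai(truth):
        """A stand-in AI that is right 90% of the time."""
        return truth if random.random() < 0.9 else not truth

    def deferential_reviewer(ai_decision, truth):
        """A human in the loop who rubber-stamps the AI 95% of the time."""
        return ai_decision if random.random() < 0.95 else truth

    def error_rate(decide, cases):
        """Fraction of cases where the final decision contradicts ground truth."""
        return sum(decide(truth) != truth for truth in cases) / len(cases)

    cases = [random.random() < 0.5 for _ in range(100_000)]
    print("AI alone:   ", error_rate(noisy_ai, cases))
    print("with review:", error_rate(lambda t: deferential_reviewer(noisy_ai(t), t), cases))

Swapping in real decision systems and a genuinely independent audit set is the hard part, but the measurement itself is this straightforward.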

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

Replication, Repetition, or Revivification

A short essay I wrote with Stéfan Sinclair on “Recapitulation, Replication, Reanalysis, Repetition, or Revivification” is now up in preprint form. The essay is part of a longer work on “Anatomy of tools: A closer look at ‘textual DH’ methodologies.” The longer work is a set of interventions looking at text tools. These came out of an ADHO SIG-DLS (Digital Literary Studies) workshop that took place in Utrecht in July 2019.

Our intervention at the workshop had the original title “Zombies as Tools: Revivification in Computer Assisted Interpretation” and concentrated on practices of exploring old tools – a sort of revivification or bringing back to life of zombie tools.

The full paper should be published soon by DHQ.

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa’s use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a research retreat (Dagstuhl) on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri, and how it is not enough to just offer male voices; we need to queer the voices. It also mentions the ethical issue of how voice conveys information, such as whether the VA is a bot or not.

The Proliferation of AI Ethics Principles: What’s Next?


The Montreal AI Ethics Institute has republished a nice article by Ravit Dotan, The Proliferation of AI Ethics Principles: What’s Next? Dotan starts by looking at some of the meta studies and then goes on to argue that we are unlikely to ever come up with a “unique set of core AI principles”, nor should we want to. She points out the lack of diversity in the sets we have. Different types of institutions will need different types of principles. She ends with these questions:

How do we navigate the proliferation of AI ethics principles? What should we use for regulation, for example? Should we seek to create new AI ethics principles which incorporate more perspectives? What if it doesn’t result in a unique set of principles, only increasing the multiplicity of principles? Is it possible to develop approaches for AI ethics governance that don’t rely on general AI ethics principles?

I am personally convinced that a more fruitful way forward is to start trading stories. These stories could take the form of incidents or cases or news or science fiction or even AI generated stories. We need to develop our ethical imagination. Hero Laird made this point in a talk on AI, Ethics and Law that was part of a salon we organize at AI4Society. They quoted from Thomas King’s The Truth About Stories to the effect that,

The truth about stories is that that’s all we are.

What stories do artificial intelligences tell themselves?