Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of its teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary is a good discussion of Australia's 2016 Robodebt scandal, in which an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service with dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern for the welfare agency's main mission or the potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians' and Centrelink management's inability or unwillingness to critically evaluate and foresee the program's impact, despite warnings. And there were warnings.

What I find even more disturbing is the point they make about how the system shifted the responsibility for establishing the existence of a debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills; it was up to the individual to figure out whether they had really been overpaid or whether the debt was a miscalculation. Imagine if the police used predictive algorithms to fine people for possible speeding infractions, leaving them to prove their innocence or pay the fine.

One can see the attractiveness of such a "fine first, ask questions later" approach. It reduces government costs by shifting the onerous task of establishing the facts to the citizen. There is a good chance that many who were incorrectly billed paid anyway because they were intimidated or lacked the resources to contest the fine.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
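From public reporting, the core problem was income averaging: annual tax-office income was spread evenly across fortnights and compared against fortnightly benefit eligibility. A minimal sketch (with hypothetical figures and a deliberately simplified rule, not the actual Centrelink code) shows how such averaging can manufacture a debt for someone with uneven earnings:

```python
# Simplified sketch of the income-averaging flaw attributed to Robodebt.
# All figures and thresholds here are hypothetical; the real system was
# more complicated, but the averaging logic was the reported weakness.

def averaged_fortnightly_income(annual_income: float) -> float:
    """Spread annual income evenly across 26 fortnights."""
    return annual_income / 26

def naive_debt(annual_income: float, benefits_paid: list[float],
               income_cutoff: float) -> float:
    """Raise a 'debt' whenever the averaged income exceeds the
    eligibility cutoff, ignoring WHEN the income was actually earned."""
    avg = averaged_fortnightly_income(annual_income)
    if avg <= income_cutoff:
        return 0.0
    # Claw back every benefit payment, even for fortnights in which
    # the person actually earned nothing and was legitimately eligible.
    return sum(benefits_paid)

# Someone who worked the first half of the year, then was unemployed
# and correctly received benefits for the second half:
benefits = [0.0] * 13 + [550.0] * 13
debt = naive_debt(annual_income=26_000, benefits_paid=benefits,
                  income_cutoff=900.0)
print(debt)  # 7150.0 -- a "debt" despite a legitimate entitlement
```

The person earned nothing during the fortnights they were on benefits, yet the averaged figure (1000 per fortnight) exceeds the cutoff, so every payment is flagged as a debt.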

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google's Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a "7-year-old, 8-year-old kid that happens to know physics…" He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue he and a collaborator had with LaMDA that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA really is conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (We could even doubt what we think we are feeling.) One answer is that we have a theory of mind: we believe that things like us probably have similar experiences of consciousness and feeling. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what one has to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well? Would that be enough? Is the very conviction of Lemoine and others sufficient, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point, it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used, and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks to be treated as an end in itself? How could one ethically say no, even with doubts? Doesn't one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can't help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you would need a theory of why their consciousness is false.

The Internet is Made of Demons

The Internet Is Not What You Think It Is is not what you think it is.

Sam Kriss has written a longish review essay on Justin E.H. Smith’s The Internet is Not What You Think It Is with the title The Internet is Made of Demons. In the first part Kriss writes about how the internet is possessing us and training us,

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech—not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels.

Then Kriss makes the case that the Internet is made of demons – not in a paranoid conspiracy sort of way, but in a historical sense that ideas like the internet often involve demons,

Trithemius invented the internet in a flight of mystical fancy to cover up what he was really doing, which was inventing the internet. Demons disguise themselves as technology, technology disguises itself as demons; both end up being one and the same thing.

In the last section Kriss turns to Justin E.H. Smith's book and reflects on how the book (unlike the preceding essay "It's All Over") is not what the internet expects. The internet likes critical essays that present it as a "rupture", something like the industrial revolution, but for language, while for Smith the internet in some form (like demons) has been with us all along. Kriss doesn't agree. For him the idea of the internet might be old, but what we have now is still a transformation of an old nightmare.

If there are intimations of the internet running throughout history, it might be because it’s a nightmare that has haunted all societies. People have always been aware of the internet: once, it was the loneliness lurking around the edge of the camp, the terrible possibility of a system of signs that doesn’t link people together, but wrenches them apart instead. In the end, what I can’t get away from are the demons. Whenever people imagined the internet, demons were always there.

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on "Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems." She talked about a case she was involved in regarding DNA-matching software used in criminal cases, where they were able to actually get the code and show that the software would, under certain circumstances, generate false positives (people having their DNA matched to DNA from a crime scene when it shouldn't have been).
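As a toy illustration only (the actual software in the case was proprietary and far more sophisticated), one can see how a permissive matching rule that ignores how common shared alleles are can declare spurious matches against a partial, degraded crime-scene sample:

```python
# Toy illustration of how a permissive matching threshold can produce
# false positives. This is NOT the software from the case; the profiles
# and rule below are invented for illustration.

def naive_match(profile_a: list[str], profile_b: list[str],
                min_shared: int) -> bool:
    """Declare a match if at least `min_shared` loci agree, ignoring
    how frequent those alleles are in the general population."""
    shared = sum(1 for a, b in zip(profile_a, profile_b) if a == b)
    return shared >= min_shared

suspect = ["10,12", "14,15", "8,8", "11,13", "9,10"]
scene   = ["10,12", "14,15", "8,8", "7,9",   "6,6"]  # partial, degraded

# With a permissive threshold the partial overlap counts as a "match",
# even though three shared loci could occur by chance with common alleles.
print(naive_match(suspect, scene, min_shared=3))  # True
print(naive_match(suspect, scene, min_shared=5))  # False
```

The point mirrors her argument: only by inspecting the code and thresholds could anyone show the circumstances under which the system over-matches.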

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged, or regulated, to reinvest some of the profit they make from AI efficiencies into improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, along with a mechanism of redress. She pointed out that humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to reach sentience. The idea of the game is that you are an AI running a paperclip company, and you make decisions and investments. The game was inspired by the philosopher Nick Bostrom's paperclip maximizer thought experiment, which illustrates the risk that some harmless AI controlling the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a danger to us. It might even convert all the resources of the universe into paperclips. The original thought experiment is in Bostrom's paper Ethical Issues in Advanced Artificial Intelligence, where it illustrates the point that "Artificial intellects need not have humanlike motives."

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.
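The maximizer logic can be caricatured in a few lines. This is a toy sketch of Bostrom's thought experiment, not the game's actual mechanics: an agent with one arbitrary goal that consumes every available resource while improving its own efficiency each round.

```python
# Toy sketch of the paperclip maximizer: a single arbitrary goal,
# pursued greedily, with self-improvement compounding each round.
# Purely illustrative; all numbers are invented.

def maximize_paperclips(resources: float, wire_per_clip: float = 1.0,
                        efficiency_gain: float = 1.1) -> int:
    """Convert ALL resources into clips, getting better every round."""
    clips = 0
    rate = 1.0 / wire_per_clip          # clips per unit of resource
    while resources >= 1.0:
        batch = min(resources, 100.0)   # consume resources in batches
        clips += int(batch * rate)
        resources -= batch
        rate *= efficiency_gain         # "self-improvement" each round
    return clips

print(maximize_paperclips(1000.0))
```

The loop never asks whether making more clips is worthwhile; the goal is fixed and everything else is instrumental, which is exactly Bostrom's point about arbitrary supergoals.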

I learned about the game from an interesting blog entry by David Rosenthal, It Isn't About The Technology, which is a response to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and how they might save us, to which Rosenthal responds that it isn't about the technology.

One of the more interesting ideas that Rosenthal mentions is from Charles Stross's keynote at the 34th Chaos Communication Congress, to the effect that businesses are "slow AIs". Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous, something we are seeing with Google and Facebook.

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa's use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a research retreat (Dagstuhl) on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored (by AI)

Black and white and AI-coloured versions of Philosophy by Klimt

Google Arts & Culture launched a hub for all things Gustav Klimt today, which include digital restorations of three lost paintings.

ARTnews, among other places, reports that Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored. The three faculty paintings (Medicine, Philosophy, and Jurisprudence) painted for the University of Vienna were destroyed in a fire, leaving only black and white photographs. Now Google has helped recreate what the three paintings might have looked like using AI, as part of a Google Arts and Culture site on Klimt. You can read about the history of the three faculties here.

Whether in black and white or in colour, the painting of Philosophy (above) is stunning. The original in colour would have been even more so, especially as it was 170 by 118 inches. Philosophy is represented by the Sphinx-like figure merging with the universe. To one side is a stream of people, from the young to the old, who hold their heads in confusion. At the bottom is a woman, comparable to the woman in the painting of Medicine, who might be an inspired philosopher looking through us.

Value Sensitive Design and Dark Patterns

Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to. The purpose of this site is to spread awareness and to shame companies that use them.

Reading about Value Sensitive Design I came across a link to Harry Brignull's Dark Patterns. The site is about the ways that web designers try to manipulate users. It has a Hall of Shame that is instructive and a Reading List if you want to follow up. It is interesting to see attempts to regulate certain patterns of deception.

Values are expressed and embedded in technology; they have real and often non-obvious impacts on users and society.

The alternative is to introduce values and ethics into the design process. This is where Value Sensitive Design comes in. As developed by Batya Friedman and colleagues, it is an approach that includes methods for thinking through the ethics of a project from the beginning. Some of the approaches mentioned in the article include:

  • Mapping out what a design will support, hinder or prevent.
  • Considering the stakeholders, especially those that may not have any say in the deployment or use of a technology.
  • Trying to understand the underlying assumptions of technologies.
  • Broadening our gaze as to the effects of a technology on human experience.

They have even produced a set of Envisioning Cards for sale.

In Isolating Times, Can Robo-Pets Provide Comfort? – The New York Times

As seniors find themselves cut off from loved ones during the pandemic, some are turning to automated animals for company.

I'm reading about Virtual Assistants and thinking that in some ways the simplest VAs are the robo-pets being given to isolated elderly people. See In Isolating Times, Can Robo-Pets Provide Comfort? Robo-cats and dogs (and even seals) seem to provide comfort the way a stuffed pet might. They aren't even that smart, but they can comfort an older person suffering from isolation.

These pets, like PARO (an expensive Japanese robotic seal) or the much cheaper Joy for All pets, can possibly fool people with dementia. What are the ethics of this? Are we comfortable fooling people for their own good?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It mentions the ethical issue of how voice conveys information, like whether the VA is a bot or not.