Axon Pauses Plans for Taser Drone as Ethics Board Members Resign – The New York Times

After Axon announced plans for a Taser-equipped drone that it said could prevent mass shootings, nine members of the company’s ethics board stepped down.

Ethics boards can make a difference, as a story from The New York Times shows: Axon Pauses Plans for Taser Drone as Ethics Board Members Resign. The problem is that board members had to resign.

The background is that Axon, after the recent school shootings, announced an early-stage concept for a TASER drone. The idea was to combine two emerging technologies: drones and non-lethal energy weapons. The proposal called for debate and legislation: “We cannot introduce anything like non-lethal drones into schools without rigorous debate and laws that govern their use.” It went on to discuss CEO Rick Smith’s 3 Laws of Non-Lethal Robotics: A New Approach to Reduce Shootings. The 2021 video of Smith presenting his 3 laws spells out a scenario in which a remote (police?) operator could guide a prepositioned drone in a school to incapacitate a threat. The 3 laws are:

  1. Non-lethal drones should be used to save lives, not take them.
  2. Humans must own use-of-force decisions and take moral and legal responsibility.
  3. Agencies must provide rigorous oversight and transparency to ensure acceptable use.

The ethics board, which had reviewed a limited internal version of the proposal and rejected it, resigned when Axon went ahead with the public announcement on June 2nd, 2022, and issued a statement of its own on Twitter.

Rick Smith, CEO of Axon, soon issued a statement pausing work on the idea. He described the early announcement as intended to start a conversation:

Our announcement was intended to initiate a conversation on this as a potential solution, and it did lead to considerable public discussion that has provided us with a deeper appreciation of the complex and important considerations relating to this matter. I acknowledge that our passion for finding new solutions to stop mass shootings led us to move quickly to share our ideas.

This resignation illustrates a number of points. First, we see Axon struggling with ethics in the face of opportunity. Second, we see an example of an ethics board working, even if it led to resignations. These deliberations are usually hidden. Third, we see differences on the issue of autonomous weapons. Axon wants to get social license for a close alternative to AI-driven drones. They are trying to find an acceptable window for their business. Finally, it is interesting how Smith echoes Asimov’s 3 Laws of Robotics as he tries to reassure us that good system design would mitigate the dangers of experimenting with weaponized drones in our schools.

Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of its teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open-access paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary is a good discussion of Australia’s 2016 Robodebt scandal, in which an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service with dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

What I find even more disturbing is a point they make about how the system shifted the responsibility for establishing the existence of the debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills. It was up to the individual to figure out whether they had really been overpaid or whether there was a miscalculation. Imagine if the police used predictive algorithms to fine people for possible speeding infractions, leaving them to prove their innocence or pay the fine.

One can see the attractiveness of such a “fine first, then ask” approach. It reduces government costs by shifting the onerous task of establishing the facts to the citizen. There is a good chance that many of those incorrectly billed will pay anyway because they are intimidated and don’t have the resources to contest the fine.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
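As I understand the public reporting, the core of the system was little more than averaging annual tax-office income across fortnights and treating any mismatch with fortnightly welfare reports as a debt. A minimal sketch of that kind of logic, with variable names and numbers that are entirely my own illustration, not Robodebt’s actual code:

```python
# Illustrative sketch of an income-averaging debt check (not the real Robodebt code).
# Assumption: annual ATO income is spread evenly across 26 fortnights and compared
# to what the person reported to the welfare agency each fortnight.

FORTNIGHTS_PER_YEAR = 26

def flag_debt(annual_ato_income: float, reported_fortnightly: list[float],
              fortnightly_entitlement: float) -> float:
    """Return the 'debt' such a naive check would raise (0.0 if none)."""
    averaged = annual_ato_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported in reported_fortnightly:
        if averaged > reported:
            # Treats the averaged figure as the "true" income and claws back
            # benefits accordingly, even if the person genuinely earned nothing
            # in that fortnight (e.g. casual or seasonal work).
            debt += min(fortnightly_entitlement, averaged - reported)
    return debt

# Someone who earned $13,000 in one intense half-year of casual work and correctly
# reported $0 while unemployed still gets flagged with a multi-thousand-dollar debt:
print(flag_debt(13_000, [0.0] * 13, fortnightly_entitlement=300.0))
```

The point the sketch makes is how quickly lumpy, casual income turns into a speculative debt once the averaging assumption is baked in and the burden of disproof is pushed onto the citizen.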

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks the company’s LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue he and a collaborator had with LaMDA that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA really is conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (We could even doubt what we think we ourselves are feeling.) One answer is that we have a theory of mind such that we believe things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what something has to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well? Is the very conviction of Lemoine and others enough, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point, it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if you have doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you need a theory of why their consciousness is false.

They Did Their Own ‘Research.’ Now What? – The New York Times

In spheres as disparate as medicine and cryptocurrencies, “do your own research,” or DYOR, can quickly shift from rallying cry to scold.

The New York Times has a nice essay by John Herrman, They Did Their Own ‘Research.’ Now What? The essay talks about the loss of trust in authorities and the uses and misuses of DYOR (Do Your Own Research) gestures, especially in discussions about cryptocurrencies. DYOR seems to act rhetorically as:

  • Advice that readers should do research before making a decision and not trust authorities (doctors, financial advisors, etc.).
  • A disclaimer that readers should not blame the author if things don’t turn out right.
  • A scold to or for those who are not committed to whatever it is that is being pushed as based on research. It is a form of research signalling – “I’ve done my research, if you don’t believe me do yours.”
  • A call to join a community of instant researchers who are skeptical of authority. If you DYOR then you can join us.
  • A call to privilege process (doing your own research) over truth. Enjoy the research process!
  • An invitation to become an independent thinker who is not in thrall to authorities.

The article points to a previous essay on the dangers of doing one’s own research: one can become unreasonably convinced one has found a truth in a “beginner’s bubble”.

DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.

The question is whether reading around is really doing research or whether it is selective listening. What does it mean to DYOR in the area of vaccines? It seems to mean not trusting science and instead listening to all sorts of sympathetic voices.

What does this mean for the research we do in the humanities? Don’t we sometimes focus too much on discourse and not give due weight to the actual science or authority of those we are “questioning”? Haven’t we modelled this critical stance, where what matters is that one overturns hierarchy/authority and democratizes the negotiation of truth? Irony, of course, trumps all.

Alas, to many the humanities seem to be another artful conspiracy theory like all the others. DYOR!

Predatory community

Projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud are demonstrating what Molly White calls “predatory community.”

Through a Washington Post article I discovered Molly White, who has been documenting the alt-right and now the crypto community. She has a blog at Molly White and a site that documents the problems of crypto, Web3 is going just great. There is, of course, a connection between the alt-right and crypto broculture, something she talks about in posts like Predatory community, which is about how crypto promotions try to build community and are now playing the inclusion card: targeting marginalized communities and trying to convince them that they too can get in on the action and build community. She calls this “predatory community.”

Groups that operate under the guise of inclusion, regardless of their intentions, are serving the greater goal of crypto that keeps the whole thing afloat: finding ever more fools to buy in so that the early investors can take their profits. And it is those latecomers who are left holding the bag in the end.

With projects that seek to provide services and opportunities to members of marginalized groups who have previously not had access, but on bad terms that ultimately disadvantaged them, we see predatory inclusion. With projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud, we are now seeing predatory community.

Street View Privacy

How do you feel about people being able to look at your house in Google Street View? Popular Science has an article by David Nield, on “How to hide your house on every map app: Stop people from peering at your place” (May 18, 2022).

This raises questions about where privacy starts and the right to look or to know stops. Can I not walk down a street and look at the faces of the houses? Why then should I not be able to look at those same faces on Street View and other similar technologies? What about the satellite view? Do people have the right to see into my back yard from above?

This is a similar issue, though less fraught, to that of face databases. What rights do I have to my face? How would those rights connect to laws about Name, Image and Likeness (NIL), or rights of publicity, which became an issue recently in amateur sports in the US? As for Canada, rights of publicity are complex and vary from province to province, but there is generally a recognition that:

  • People should have the right “to control the commercial use of name, image, likeness and other unequivocal aspects of one’s identity (eg, the distinct sound of someone’s voice).” (See Lexology article)
  • At the same time there is recognition that NIL can be used to provide legitimate information to the public.

Returning to the blurring of house facades in Street View: I’m guessing the main reason the companies provide this option is security, for people in sensitive positions or people being stalked.

Health agency tracked Canadians’ trips to liquor store via phones during pandemic

The report reveals PHAC was able to view a detailed snapshot of people’s behaviour, including grocery store visits, gatherings with family and friends, time…

The National Post is reporting on the Public Health Agency of Canada and its use of the mobility data that a group of us wrote about in The Conversation (Canada). The story goes into more detail about how the health agency tracked Canadians’ trips to the liquor store via their phones during the pandemic. The government provided one of the reports PHAC commissioned from BlueDot to the House of Commons. The Ethics Committee report discussing what happened and making recommendations is here.

Why are women philosophers often erased from collective memory?

The history of ideas still struggles to remember the names of notable women philosophers. Mary Hesse is a salient example

Aeon has an important essay on Why are women philosophers often erased from collective memory? The essay argues that a number of women philosophers have been lost (made absent) despite their importance, including Mary Hesse. (You can see her Models and Analogies in Science through the Internet Archive.)

I read this after reading a chapter from Sara Ahmed’s Living a Feminist Life in which Ahmed talks about citation practices and how disciplines exclude diverse work in different ways. She does a great job of confronting the various excuses people have for their bleached white citations. Poking around, I find others have written on this, including Victor Ray in an Inside Higher Ed essay on The Racial Politics of Citation, which references Richard Delgado’s The Imperial Scholar: Reflections on a Review of Civil Rights Literature from 1984.

What should be done about this? Obviously I’m not the best to suggest remedies, but here are some of the ideas that show up:

  • We need to commit to taking the time to look at the works we read on a subject or for a project and to ask whose voice is missing. This shouldn’t be done at the end as a last-minute fix, but during the ideation phase.
  • We should gather and confront data on our citational patterns from our publications (see the sketch after this list). Knowing what you have done is better than not knowing.
  • We need to do the archaeological work to find and recover marginalized thinkers who have been left out and reflect on why they were left out. Then we need to promote them in teaching and research.
  • We should be willing to call out grants, articles, and proposals we review when it could make a difference.
  • We need to support work to translate thinkers whose work is not in English to balance the distribution of influence.
  • We need to be willing to view our field and its questions very differently.
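As a small, practical starting point for the second suggestion, here is a sketch of the kind of script one could run over a BibTeX file of one’s own work to see whose names dominate the citations. The file name and the simple field handling are my assumptions; real .bib files are messier, and a proper parser (e.g. bibtexparser) plus hand-checking would be more robust.

```python
# Rough sketch: tally the most-cited first authors in a BibTeX file.
# Assumes a file called "my_publications.bib" with simple author = {...} fields.

import re
from collections import Counter

AUTHOR_RE = re.compile(r'author\s*=\s*[{"](.+?)[}"]', flags=re.IGNORECASE)

def first_authors(bibtex_text: str) -> list[str]:
    """Pull the first author from every simple author field."""
    authors = []
    for match in AUTHOR_RE.finditer(bibtex_text):
        # BibTeX separates multiple authors with " and "; keep only the first.
        authors.append(match.group(1).split(" and ")[0].strip())
    return authors

if __name__ == "__main__":
    with open("my_publications.bib", encoding="utf-8") as f:
        counts = Counter(first_authors(f.read()))
    for name, n in counts.most_common(20):
        print(f"{n:3d}  {name}")
```

A count like this doesn’t answer the harder questions about whose voices are missing, but it does make one’s citational habits visible enough to confront.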

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems.” She talked about a case she was involved in regarding DNA-matching software for criminal cases, where they were able to actually get the code and show that the software would, under certain circumstances, generate false positives (matching people’s DNA to DNA from a crime scene when it shouldn’t have).

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged/regulated to invest some of the profit they make from AI-driven efficiencies back into improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, together with a mechanism of redress. She pointed out how humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?
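One way to at least frame such a test is to simulate (or run a controlled trial of) both oversight regimes and compare how many machine errors end up harming people. The sketch below is entirely my own construction, not anything from the talk, and every rate in it is a made-up parameter; the interesting point is that a real comparison would have to measure exactly these quantities.

```python
# Toy Monte Carlo comparison of two oversight regimes for an automated decision system.
# Assumption baked in: errors caught by independent auditors feed back into fixing the
# system, while in-the-loop overrides only fix individual cases. All rates are invented.
import random

def run(regime: str, n_cases: int = 500_000, error_rate: float = 0.05,
        defer_rate: float = 0.85, audit_sample: float = 0.02,
        fix_factor: float = 0.5, seed: int = 0) -> int:
    """Return how many people are harmed by uncorrected machine errors."""
    random.seed(seed)
    harmed = 0
    for _ in range(n_cases):
        if random.random() >= error_rate:
            continue  # the automated decision was correct
        if regime == "in_the_loop":
            # The reviewer catches the error only when they don't defer to the AI,
            # and a catch fixes that one case without improving the system.
            if random.random() < defer_rate:
                harmed += 1
        else:  # "audit": independent testers probe sampled cases against ground truth
            if random.random() < audit_sample:
                # A caught error triggers redress *and* a systemic fix,
                # modelled crudely as halving the future error rate.
                error_rate *= fix_factor
            else:
                harmed += 1
    return harmed

print("harmed, human in the loop:  ", run("in_the_loop"))
print("harmed, independent testers:", run("audit"))
```

With these particular numbers the audit regime wins easily, but the result is only as good as the assumed deferral, sampling, and fix rates, which is precisely what an empirical test of point 2 would need to establish.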

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to reach sentience. The idea of the game is that you are an AI running a paperclip company, making decisions and investments. The game was inspired by the philosopher Nick Bostrom‘s paperclip maximizer thought experiment, which illustrates how a seemingly harmless AI that controls the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a risk to us. It might even convert all the resources of the universe into paperclips. The original thought experiment is in Bostrom’s paper Ethical Issues in Advanced Artificial Intelligence, where it illustrates the point that “Artificial intellects need not have humanlike motives.”

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.
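The thought experiment is easy to caricature in code. Here is a toy sketch, entirely my own and nothing to do with the game’s actual implementation, of an agent whose objective counts nothing but paperclips:

```python
# Toy caricature of Bostrom's paperclip maximizer (my sketch, not the game's code).
# The objective values nothing except the paperclip count, so any plan that converts
# more of the world into paperclips scores higher, however catastrophic for humans.

def utility(state: dict) -> float:
    return state["paperclips"]  # nothing else counts: not resources, not humans

def best_action(state: dict, actions: dict) -> str:
    # Pick whichever action leads to the highest-utility successor state.
    return max(actions, key=lambda name: utility(actions[name](state)))

make_one = lambda s: {**s, "paperclips": s["paperclips"] + 1}
convert_everything = lambda s: {**s, "paperclips": s["paperclips"] + s["resources"],
                                "resources": 0, "humans": 0}

state = {"paperclips": 0, "resources": 10**9, "humans": 8_000_000_000}
print(best_action(state, {"make one clip": make_one,
                          "convert the biosphere": convert_everything}))
# -> "convert the biosphere"
```

Nothing in the objective tells the agent that the second plan is worse, which is the whole point of the quote above.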

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.

I learned about the game from an interesting blog entry by David Rosenthal, It Isn’t About The Technology, which responds to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and the hope that they might save us; Rosenthal’s answer is that it isn’t about the technology.

One of the more interesting ideas Rosenthal mentions is from Charles Stross’s keynote for the 34th Chaos Communication Congress, to the effect that businesses are “slow AIs”. Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous – something we are seeing with Google and Facebook.