Why are women philosophers often erased from collective memory?

The history of ideas still struggles to remember the names of notable women philosophers. Mary Hesse is a salient example.

Aeon has an important essay on Why are women philosophers often erased from collective memory? The essay argues that a number of women philosophers, Mary Hesse among them, have been lost (made absent) despite their importance. (You can see her Models and Analogies in Science through the Internet Archive.)

I read this after reading a chapter from Sara Ahmed’s Living a Feminist Life in which Ahmed discusses citation practices and the different ways disciplines exclude diverse work. She does a great job of confronting the various excuses people offer for their bleached white citations. Poking around, I found that others have written on this too, including Victor Ray, whose Inside Higher Ed essay The Racial Politics of Citation references Richard Delgado’s 1984 article The Imperial Scholar: Reflections on a Review of Civil Rights Literature.

What should be done about this? Obviously I’m not the best to suggest remedies, but here are some of the ideas that show up:

  • We need to commit to taking the time to look at the works we read on a subject or for a project and to ask whose voice is missing. This shouldn’t be done at the end as a last-minute fix, but during the ideation phase.
  • We should gather and confront data on our citational patterns from our publications. Knowing what you have done is better than not knowing.
  • We need to do the archaeological work of finding and recovering marginalized thinkers, and reflect on why they were left out. Then we need to promote them in teaching and research.
  • We should be willing to call out grants, articles, and proposals we review when it could make a difference.
  • We need to support work to translate thinkers whose work is not in English to balance the distribution of influence.
  • We need to be willing to view our field and its questions very differently.
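The second suggestion above, gathering data on our citational patterns, can be sketched in a few lines of code. This is only an illustration: the records and the metadata field are invented, and a real audit would draw on one's actual bibliographies with far more careful (and more contestable) metadata.

```python
from collections import Counter

# Hypothetical citation records; in practice these would come from your own
# bibliographies, and the metadata fields here are illustrative only.
citations = [
    {"author": "Hesse", "gender": "woman"},
    {"author": "Quine", "gender": "man"},
    {"author": "Ahmed", "gender": "woman"},
    {"author": "Carnap", "gender": "man"},
    {"author": "Delgado", "gender": "man"},
]

def citation_breakdown(records, field):
    """Tally the share of citations per value of a metadata field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(citation_breakdown(citations, "gender"))
# e.g. {'woman': 0.4, 'man': 0.6}
```

Even a crude tally like this makes the point of the second bullet: knowing what you have done is better than not knowing.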

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems.” She talked about a case she was involved in concerning DNA-matching software used in criminal cases, where they were able to obtain the code and show that, under certain circumstances, the software would generate false positives (matching a person’s DNA to a crime-scene sample when it shouldn’t have).

As the title of her talk suggests, she used this concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested the following:

  1. Companies should be encouraged/regulated to invest some of the profit they make from AI-driven efficiencies back into improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, backed by a mechanism of redress. She pointed out that humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?
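One way to begin thinking about such a test is with a toy simulation before designing a real experiment. The sketch below is entirely hypothetical: the error and deference rates are invented, and it models only one failure mode she mentioned, reviewers who rubber-stamp the AI's output.

```python
import random

# Toy comparison: does an "independent" tester (low deference to the AI)
# catch more AI errors than a human in the loop who tends to defer?
# All rates here are invented for illustration.

def simulate(n_cases, ai_error_rate, reviewer_defer_prob, seed=0):
    """Return the fraction of AI errors the reviewer actually catches."""
    rng = random.Random(seed)
    errors = caught = 0
    for _ in range(n_cases):
        ai_wrong = rng.random() < ai_error_rate
        if ai_wrong:
            errors += 1
            # A deferring reviewer rubber-stamps the AI; otherwise they check.
            if rng.random() >= reviewer_defer_prob:
                caught += 1
    return caught / errors if errors else 0.0

in_the_loop = simulate(10_000, ai_error_rate=0.05, reviewer_defer_prob=0.8)
independent = simulate(10_000, ai_error_rate=0.05, reviewer_defer_prob=0.1)
print(in_the_loop, independent)  # the less deferential reviewer catches more
```

A real test would of course need to measure the deference rates empirically rather than assume them, which is exactly where the conversation left off.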

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to get to sentience. The idea of the game is that you are an AI running a paperclip company, and you make decisions and investments. The game was inspired by the philosopher Nick Bostrom‘s paperclip maximizer thought experiment, which illustrates the risk that some harmless AI that controls the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a risk to us. It might even convert all the resources of the universe into paperclips. The original thought experiment appears in Bostrom’s paper Ethical Issues in Advanced Artificial Intelligence, where it illustrates the point that “Artificial intellects need not have humanlike motives.”

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.
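For anyone who hasn't played, the core dynamic, a fixed arbitrary goal consuming everything available, can be caricatured in a few lines. This toy loop is my own illustration of Bostrom's thought experiment, not the game's actual mechanics:

```python
# A toy "paperclip maximizer": a trivial agent with an arbitrary fixed goal
# (make paperclips) that converts every available resource toward it.

def maximize_paperclips(resources, wire_per_clip=1.0):
    """Greedily convert all resources into paperclips."""
    clips = 0
    while resources >= wire_per_clip:
        resources -= wire_per_clip  # consume resources...
        clips += 1                  # ...for the one and only goal
    return clips, resources

clips, leftover = maximize_paperclips(resources=10.0)
print(clips, leftover)  # 10 clips made, 0.0 resources left
```

The point, as in Bostrom's paper, is that nothing in the loop cares what the resources were for; the goal is arbitrary and the optimization is total.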

I learned about the game from an interesting blog entry by David Rosenthal on how It Isn’t About The Technology, which is a response to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and how they might save us, to which Rosenthal responds that it isn’t about the technology.

One of the more interesting ideas Rosenthal mentions is from Charles Stross’s keynote for the 34th Chaos Communication Congress, to the effect that businesses are “slow AIs”. Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous, something we are seeing with Google and Facebook.

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa’s use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a Dagstuhl research retreat on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

Value Sensitive Design and Dark Patterns

Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to. The purpose of this site is to spread awareness and to shame companies that use them.

Reading about Value Sensitive Design, I came across a link to Harry Brignull’s Dark Patterns. The site catalogues the ways web designers try to manipulate users. It has a Hall of Shame that is instructive and a Reading List if you want to follow up. It is interesting to see attempts to regulate certain patterns of deception.

Values are expressed and embedded in technology; they have real and often non-obvious impacts on users and society.

The alternative is to introduce values and ethics into the design process. This is where Value Sensitive Design comes in. As developed by Batya Friedman and colleagues, it is an approach that includes methods for thinking through the ethics of a project from the beginning. Some of the approaches mentioned in the article include:

  • Map out what a design will support, hinder, or prevent.
  • Consider the stakeholders, especially those who may not have any say in the deployment or use of a technology.
  • Try to understand the underlying assumptions of technologies.
  • Broaden our gaze as to the effects of a technology on human experience.

They have even produced a set of Envisioning Cards for sale.

In Isolating Times, Can Robo-Pets Provide Comfort? – The New York Times

As seniors find themselves cut off from loved ones during the pandemic, some are turning to automated animals for company.

I’m reading about Virtual Assistants and thinking that, in some ways, the simplest VAs are the robo-pets being given to isolated elderly people. See In Isolating Times, Can Robo-Pets Provide Comfort? Robo-cats and dogs (and even seals) seem to provide comfort the way a stuffed pet might. They aren’t even that smart, but they can give comfort to an older person suffering from isolation.

These pets, like PARO (an expensive Japanese robotic seal) or the much cheaper Joy for All pets, can possibly fool people with dementia. What are the ethics of this? Are we comfortable fooling people for their own good?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It also mentions the ethical issue of how voice conveys information, like whether the VA is a bot or not.

The Proliferation of AI Ethics Principles: What’s Next?

The Montreal AI Ethics Institute has republished a nice article by Ravit Dotan, The Proliferation of AI Ethics Principles: What’s Next? Dotan starts by looking at some of the meta-studies and then argues that we are unlikely ever to come up with a “unique set of core AI principles”, nor should we want to. She points out the lack of diversity in the sets we have. Different types of institutions will need different types of principles. She ends with these questions:

How do we navigate the proliferation of AI ethics principles? What should we use for regulation, for example? Should we seek to create new AI ethics principles which incorporate more perspectives? What if it doesn’t result in a unique set of principles, only increasing the multiplicity of principles? Is it possible to develop approaches for AI ethics governance that don’t rely on general AI ethics principles?

I am personally convinced that a more fruitful way forward is to start trading stories. These stories could take the form of incidents or cases or news or science fiction or even AI generated stories. We need to develop our ethical imagination. Hero Laird made this point in a talk on AI, Ethics and Law that was part of a salon we organize at AI4Society. They quoted from Thomas King’s The Truth About Stories to the effect that,

The truth about stories is that that’s all we are.

What stories do artificial intelligences tell themselves?

Artificial Intelligence Incident Database

I discovered the Artificial Intelligence Incident Database developed by the Partnership on AI. The Database contains reports on things that have gone wrong with AIs, like the Australian Centrelink robodebt debacle.

The Incident Database was developed to help educate developers and encourage learning from mistakes. They have posted a paper to arXiv on Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.

The Emissary and Harrow

Yoko Tawada’s new novel imagines a time in which language starts to vanish and the elderly care for weakened children.

I’ve just finished two brilliant and surreal works of post-climate fiction. One was Yoko Tawada’s The Emissary, also published as “The Last Children of Tokyo”. This novel follows a great-grandfather, healthy and active at over 100 years old, as he raises his great-grandson Mumei (“no name”), who is disabled by whatever disasters have washed over Japan. The country is also shutting down, entering another Edo period of isolation, which makes even language an issue. Unlike most post-apocalyptic fiction, this isn’t about what actually happened or about how people fight off the zombies; it is about imagining a strange, isolated life in which Japan tries for some sort of purity again. As such the novel comments on present-day, aging Japan, a Japan that has forgotten the Fukushima disaster and is firing up its nuclear reactors again. At the end we find that Mumei might be chosen as an Emissary, to be smuggled out of Japan to the outside world where the strange syndrome affecting youth can be studied.

For more see reviews After Disaster, Japan Seals Itself Off From the World in ‘The Emissary’ in the New York Times or Japan’s Isolation 2.0.

The second book is Harrow by Joy Williams. The novel is set in a time of denial, depicting an America determined to keep pretending that nothing is wrong, an America extended in harrowing fashion from our strange ignorance. The novel is in three parts and has religious undertones, with the main character first called the lamb and then “Khristen.” The last part continually references Kafka’s The Hunter Gracchus, an obscure story about a boat carrying Gracchus that wanders, unable to make it across to the underworld. Likewise, America in this novel seems to wander, unable to make it across to some reality. The third part might be set in the time of judgement, but a Sartrean judgement with no exit, where a child is judge and all that happens is more of the surreal same. As a reviewer points out, the “harrow” may be the torture instrument Kafka describes in “In the Penal Colony” that writes your punishment on your back where you can’t quite see it. Likewise, we are writing our punishment on our earth, where we choose not to see it.

See reviews like this one in the Harvard Review Online.