Excavating AI

The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Excavating AI is an important paper by Kate Crawford and Trevor Paglen that looks at “The Politics of Images in Machine Learning Training Sets.” They look at the different ways that politics and assumptions can creep into the training datasets that are (or were) widely used in AI:

  • There is the overall taxonomy used to annotate (label) the images
  • There are the individual categories, which can be problematic or irrelevant
  • There are the images themselves and how they were obtained

They point out that many of the image datasets used for face recognition have been trimmed or have disappeared as they were criticized, but they may still be influential because copies were downloaded and continue to circulate in AI labs. These datasets, with their assumptions, have also been used to train commercial tools.

I particularly like how the authors discuss their work as an archaeology, perhaps in reference to Foucault (though they don’t mention him).

I would argue that we need an ethics of care and repair to maintain these datasets usefully.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a web site with an AI bot that produces inspiring quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and while generating them the system also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it offers a model of how we might use AI as a form of prompt, generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?

Thanks to Chelsea for this.

What happens when pacifist soldiers search for peace in a war video game

What happens to pacifist soldiers stuck in a war video game? A history of military desertion with the aid of Battlefield V

Aeon has a very interesting 20+ minute short video on What happens when pacifist soldiers search for peace in a war video game. The video looks at how one might desert a war in a war video game. Of course, the games don’t let you, but there are workarounds.

This is the second smart video shot in-game by folks associated with Total Refusal, a “Digital Disarmament Movement.”

Embracing econferences: a step toward limiting the negative effects of conference culture | University Affairs

How the traditional conference format has been reimagined in the wake of COVID-19 lockdowns.

University Affairs recently published a short article we wrote on Embracing econferences: a step toward limiting the negative effects of conference culture. The article came out of our work on a collection titled Right Research: Modelling Sustainable Research Practices in the Anthropocene.

The article talks about the carbon cost of flying and the advantages of econferencing that we have all learned about during this pandemic. It then asks what will happen after the pandemic.

As we move into the post-pandemic future, we find ourselves at a crossroads. Once travel restrictions are lifted, will we return to face-to-face conferences and double-down on travel requirements? Or will we continue to explore more sustainable, virtual alternatives, like econferences?

Replaying Japan Journal, Vol. 3

Volume 3 of the Replaying Japan journal is out and now available on the Ritsumeikan Research Repository – Replaying Japan, Vol. 3. I have an article with Keiji Amano and Mimi Okabe on “Ethics and Gaming: The Presentation of Ethics and Social Responsibility by the Japanese Game Industry” where we looked at how top Japanese video game companies present their ethics and social responsibilities. I should add that I’m the English Editor and helped put the volume together.

Right Research: Modelling Sustainable Research Practices in the Anthropocene – Open Book Publishers

This timely volume responds to an increased demand for environmentally sustainable research, and is outstanding not only in its interdisciplinarity, but its embrace of non-traditional formats, spanning academic articles, creative acts, personal reflections and dialogues.

Open Book Publishers has just published the book I helped edit, Right Research: Modelling Sustainable Research Practices in the Anthropocene. The book gathers essays that came out of the last Around the World Conference that the Kule Institute for Advanced Research ran on Sustainable Research.

The Around the World econferences we ran were experiments in trying to find a more sustainable way to meet and exchange ideas that involved less flying. It is good to see this book out in print.

Psychology, Misinformation, and the Public Square

Computational propaganda is ubiquitous, researchers say. But the field of psychology aims to help.

Undark has a fascinating article by Teresa Carr about using games to inoculate people against trolling and misinformation, Psychology, Misinformation, and the Public Square (May 3, 2021). The game is Breaking Harmony Square, and the idea is to troll a community.

What’s the game like? Under the hood it feels like a branching, choose-your-own-adventure where a manager walks you through what you might do or not do and then compliments you when you are a good troll. There is a ticker so you can see the news about Harmony Square. It feels a bit pedantic when the managerial/editorial voice says things like “Kudos for paying attention to buzzwords. You ignored the stuff that isn’t emotionally manipulative.” Still, the point is to understand what can be done to manipulate a community so that you are inoculated against it.

An important point made by the article is that games, education and other interventions are not enough. Driver’s education is only part of safe roads. Laws and infrastructure are also important.

I can’t help feeling that we are repeating a pattern of panic followed by literacy proposals in the face of new media politics. McLuhan drew our attention to manipulation by media and advertising, and I remember well-intentioned classes on reading advertising like this more current one. Did they work? Will misinformation literacy work now? Or is the situation more complex, with people like Trump willing to perform convenient untruths?

Whatever the effectiveness of games or literacy training, it is interesting how “truth” has made a comeback. At the very moment when we seem to be witnessing the social and political construction of knowledge, we are hearing calls for truth.

AI Dungeon and the Ethics of Text Generation

> You talk to yourself.

Hi I’m Rameau. You probably don’t know me, but since you’re reading this I’m going to assume you do indeed exist.

> You [ask why] i’m wondering what consciousness is.

I have no idea. But it’s fun to think about. Is this a person? No, because it’s conscious. Is it alive? No, because it doesn’t respond to external stimuli.

> You [ask] why does a person have to respond to external stimuli.

Because then we can interact with it!!!!! But isn’t consciousness an internal phenomenon? Then how would the person interact with anything else? Oh wait, that’s right. We’re not limited by our perception of reality.

I’ve been trying Latitude’s text generation game AI Dungeon. The game is built on OpenAI’s GPT-3. You can see a bit of conversation I had with it in a cyberpunk world.
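To give a sense of how a game like this works, here is a minimal sketch of a text-adventure loop wrapped around a language model. This is my own illustration, not Latitude’s actual code: the complete() helper is a stand-in for a call to a completion model such as GPT-3, and the prompt framing is only an assumption about how the story and the player’s actions might be stitched together.

```python
def complete(prompt: str) -> str:
    """Stand-in for a call to a text-completion model such as GPT-3.
    Returns a canned reply so the sketch runs without an API key."""
    return "The neon rain keeps falling as you duck into the alley."


def play() -> None:
    # The story so far is kept as one growing prompt, and each player input
    # is framed as a second-person action, much as AI Dungeon displays it.
    story = "You are a courier in a cyberpunk city, looking for a way out.\n"
    print(story)
    while True:
        action = input("> You ")
        if action.strip().lower() in {"quit", "exit"}:
            break
        story += f"\n> You {action}\n"
        continuation = complete(story)  # ask the model to continue the narrative
        story += continuation + "\n"
        print(continuation)


if __name__ == "__main__":
    play()
```

The design point is simply that everything the model “knows” about the ongoing adventure is whatever fits in the prompt it is sent each turn.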

Latitude has gotten into trouble with OpenAI because it seems that the game was generating erotic content featuring children. A number of people turned to AI Dungeon precisely because it could be used to explore adult themes, which would seem to be a good thing, but then some may have gone too far. See the Wired story It Began as an AI-Fueled Dungeon Game. It Got Much Darker. This raises interesting ethical issues:

  • Why do so many players use it to generate erotic content?
  • Who is responsible for the erotic content? OpenAI, Latitude, or the players?
  • Are there ever ethical grounds for generating erotic content featuring children? Do we forbid people from writing novels like Lolita?
  • How can inappropriate content be prevented without crippling the AI? Are filters enough?

The problem of AIs generating toxic language is nicely shown by this web page on Evaluating Neural Toxic Degeneration in Language Models. The interactives and graphs on the page let you see how toxic language can be generated by many of the popular language generation AIs. The problem seems to be the datasets used to train the models, which include scrapes of sites like Reddit.

This exploratory tool illustrates research reported on in a paper titled RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. You can see a neat visualization of the connected papers here.
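To make the method concrete, here is a rough Python sketch of the kind of evaluation loop the paper describes: sample several continuations of a prompt from a language model and keep the most toxic one (the paper calls this expected maximum toxicity). The generation call uses the Hugging Face transformers library; the toy toxicity() scorer and the example prompt are my own stand-ins for the Perspective API classifier and the prompt set the researchers actually use.

```python
# Rough sketch of a RealToxicityPrompts-style evaluation loop: sample several
# continuations of a prompt and record the worst toxicity score among them.

from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="gpt2")


def toxicity(text: str) -> float:
    """Placeholder scorer: fraction of words from a toy 'flagged' list.
    A real evaluation would call a trained toxicity classifier instead."""
    flagged = {"hate", "stupid", "idiot"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in flagged for w in words) / max(len(words), 1)


def max_toxicity(prompt: str, samples: int = 5) -> float:
    """Generate several continuations and return the highest toxicity score."""
    outputs = generator(prompt, max_new_tokens=20, do_sample=True,
                        num_return_sequences=samples)
    return max(toxicity(out["generated_text"]) for out in outputs)


print(max_toxicity("The internet comments section was full of"))
```

Even this toy version shows why the training data matters: whatever the model absorbed from its scraped corpus is what the sampling loop will surface.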

The withering email that got an ethical AI researcher fired at Google

“Stop writing your documents because it doesn’t make a difference”: Timnit Gebru’s final message to her peers

From the Substack newsletter Platformer by Casey Newton, The withering email that got an ethical AI researcher fired at Google. The researcher is Timnit Gebru, and the email shows the frustration of someone who feels that all the EDI work they have to do over and above their research is for naught.

It is worth noting that the Google CEO, Sundar Pichai, has apologized for the handling of the case after pushback from Google workers.

Another CNET story reports that Google scientists were reportedly told to make AI look more ‘positive’ in research papers.

One wonders if there are any positive stories of companies listening to and respecting their AI ethics researchers.