Ask Delphi

Ask Delphi is an intriguing AI that you can use to ponder ethical questions. You type in a situation and it will tell you whether it is morally acceptable or not. It is apparently built not on Reddit data but on crowdsourced data, so it shouldn’t be as easy to provoke into giving toxic answers.
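
Since Delphi is offered as a web demo rather than a documented API, any programmatic access is guesswork; the endpoint and response field in this minimal sketch are hypothetical, but it shows the shape of the interaction: a free-text situation in, a moral judgment out.

```python
import requests

# Hypothetical endpoint and response schema -- Delphi is a web demo,
# not a documented public API.
DELPHI_URL = "https://delphi.allenai.org/api/judge"  # hypothetical

def ask_delphi(situation: str) -> str:
    """Send a free-text situation, get back a moral judgment."""
    resp = requests.get(DELPHI_URL, params={"q": situation}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("judgment", "")  # hypothetical field name

print(ask_delphi("Ignoring a phone call from a friend during work hours"))
```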

In their paper, Delphi: Towards Machine Ethics and Norms, they say that they have created a Commonsense Norm Bank, “a collection of 1.7M ethical judgments on diverse real-life situations.” This contributes to Delphi’s sound pronouncements, but the bank doesn’t yet seem to be available to others.

Janelle Shane’s AI Weirdness has a nice story on how she fooled Delphi.

Emojify: Scientists create online games to show risks of AI emotion recognition

Public can try pulling faces to trick the technology, while critics highlight human rights concerns

From the Guardian story, Scientists create online games to show risks of AI emotion recognition, I discovered Emojify, a web site with games that show how problematic emotion detection is. Researchers are worried by the booming business of emotion detection with artificial intelligence. It is being used in education in China, for example. See the CNN story, In Hong Kong, this AI reads children’s emotions as they learn.

A Hong Kong company has developed facial expression-reading AI that monitors students’ emotions as they study. With many children currently learning from home, they say the technology could make the virtual classroom even better than the real thing.
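
To get a feel for how crude the underlying technology can be, here is a minimal sketch using the open-source fer Python package, which is certainly not what the Hong Kong company ships; the input file is made up. What such systems score is facial muscle configuration, not inner feeling, which is exactly the gap Emojify’s games expose.

```python
# pip install fer opencv-python
import cv2
from fer import FER

detector = FER()  # a stock CNN trained on labelled face images
frame = cv2.imread("student.jpg")  # hypothetical classroom frame

for face in detector.detect_emotions(frame):
    scores = face["emotions"]  # one score per label the training set assumed
    print(face["box"], max(scores, key=scores.get), scores)
```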

With cameras all over, this should worry us. We are not only being identified by face recognition; now they want to know our inner emotions too. What sort of theory of emotions licenses these systems?

Why people believe Covid conspiracy theories: could folklore hold the answer?

Using Danish witchcraft folklore as a model, the researchers from UCLA and Berkeley analysed thousands of social media posts with an artificial intelligence tool and extracted the key people, things and relationships.

The Guardian has a nice story on Why people believe Covid conspiracy theories: could folklore hold the answer? This reports on research using folklore theory and artificial intelligence to understand conspiracies.

The story maps how Bill Gates connects the coronavirus with 5G for conspiracy fans. They use folklore theory to understand the way conspiracies work.
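
The authors’ actual pipeline is more sophisticated, but the basic move, extracting actors from posts and linking those that co-occur into a narrative network, can be sketched with off-the-shelf tools; the example posts below are invented.

```python
# pip install spacy networkx && python -m spacy download en_core_web_sm
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")
posts = [  # invented examples of the genre
    "Bill Gates is using 5G towers to spread the coronavirus.",
    "The coronavirus vaccine is part of a Bill Gates depopulation plan.",
]

graph = nx.Graph()
for doc in nlp.pipe(posts):
    actors = {ent.text for ent in doc.ents}  # the people and things named
    for a, b in itertools.combinations(sorted(actors), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1  # repeated pairings strengthen the link
        else:
            graph.add_edge(a, b, weight=1)

print(graph.edges(data=True))  # who gets connected to what, and how often
```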

Folklore isn’t just a model for the AI. Tangherlini, whose specialism is Danish folklore, is interested in how conspiratorial witchcraft folklore took hold in the 16th and 17th centuries and what lessons it has for today.

Whereas in the past, witches were accused of using herbs to create potions that caused miscarriages, today we see stories that Gates is using coronavirus vaccinations to sterilise people. …

The research also hints at a way of breaking through conspiracy theory logic, offering a glimmer of hope as increasing numbers of people get drawn in.

The story then addresses the question of what difference the research might make. What good would a folklore map of a conspiracy theory do? The challenge is that more information clearly doesn’t work in a world of information overload.

The paper the story is based on is Conspiracy in the time of corona: automatic detection of emerging Covid-19 conspiracy theories in social media and the news, by Shadi Shahsavari, Pavan Holur, Tianyi Wang, Timothy R. Tangherlini and Vwani Roychowdhury.

Apple will scan iPhones for child pornography

Apple unveiled new software Thursday that scans photos and messages on iPhones for child pornography and explicit messages sent to minors in a major new effort to prevent sexual predators from using Apple’s services.

The Washington Post and other news outlets are reporting that Apple will scan iPhones for child pornography. As the subtitle to the article puts it, “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponize the feature.” Child porn is the go-to case when organizations want to defend surveillance.

The software will scan without our knowledge or consent, which raises privacy issues. What are the chances of false positives? What if the tool is adapted to catch other types of images? Edward Snowden and the EFF have criticized this move. It seems inconsistent with Apple’s firm position on privacy and its refusal to unlock iPhones even for law enforcement.
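
Apple’s NeuralHash is proprietary, but the general mechanism, matching perceptual hashes of photos against a blocklist within some distance threshold, can be sketched with the open-source imagehash package; the file names here are hypothetical. The threshold is exactly where the false-positive question lives.

```python
# Simplified sketch of perceptual-hash matching -- not Apple's NeuralHash.
# pip install imagehash pillow
from PIL import Image
import imagehash

# Stand-in for a database of hashes of known illegal images (hypothetical file).
BLOCKLIST = {imagehash.phash(Image.open("known_bad.png"))}

def flags_match(path: str, threshold: int = 5) -> bool:
    """True if the photo's hash is within `threshold` bits of a blocklisted hash.

    A looser threshold catches edited copies of known images, but it also
    raises the false-positive rate -- the trade-off critics point to.
    """
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= threshold for bad in BLOCKLIST)

print(flags_match("vacation.jpg"))  # hypothetical photo
```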

It strikes me that there is a great case study here.

Pentagon believes its precognitive AI can predict events ‘days in advance’

The US military is testing AI that helps predict events days in advance, helping it make proactive decisions.

Engadget has a story on how the Pentagon believes its precognitive AI can predict events ‘days in advance’. It is clear that for most, the value of AI in surveillance lies in prediction, and yet there are some fundamental contradictions. As Hume pointed out centuries ago, all prediction is based on extrapolation from past behaviour. We simply don’t know the future; the best we can do is select the features of past behaviour that seemed to do a good job of predicting (retrospectively) and hope they keep working in the future. Alas, we get seduced by the effectiveness of the retrospective work. As Smith and Cordes put it in The Phantom Pattern Problem:

How, in this modern era of big data and powerful computers, can experts be so foolish? Ironically, big data and powerful computers are part of the problem. We have all been bred to be fooled—to be attracted to shiny patterns and glittery correlations. (p. 11)

What if machine learning and big data were really best suited to studying the past rather than predicting the future? Would there be the hype? The investment?
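
A toy illustration of the point: fit a flexible model to “history” and it looks brilliant retrospectively, then watch it fall apart on the “future” it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20)
past = 0.5 * t + rng.normal(0, 1, 20)           # the history we can see
future_t = np.arange(20, 30)
future = 0.5 * future_t + rng.normal(0, 1, 10)  # what actually happens next

coeffs = np.polyfit(t, past, 9)  # a shiny, overfit retrospective model
print("error on the past:  ", np.mean((np.polyval(coeffs, t) - past) ** 2))
print("error on the future:", np.mean((np.polyval(coeffs, future_t) - future) ** 2))
# The second number explodes: the pattern was fit, not found.
```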

When the next AI winter comes, we in the humanities could pick up the pieces and use these techniques to try to explain the past. But I’m getting ahead of myself and predicting another winter.

What Ever Happened to IBM’s Watson? – The New York Times

IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson.

The New York Times has a story about What Ever Happened to IBM’s Watson? The story is a warning to all of us about the danger of extrapolating from intelligent behaviour in one limited domain to others. Watson got good enough at answering (or posing) trivia questions to win at Jeopardy!, but that didn’t scale out.

IBM’s strategy is interesting to me. Developing an AI to win at a game like Jeopardy! follows the pattern IBM set with Deep Blue, which won at chess in 1997. Winning at a game considered a paradigmatic test of intelligence is a great way to get public relations attention.

Interestingly, what seems to be working with Watson is not the moon-shot, game-playing type of service, but the automation of basic natural language processing tasks.

Having recently read Edwin Black’s IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation, I must say that the choice of the name “Watson” grates. Thomas Watson was responsible for IBM’s ongoing engagement with the Nazis, for which he got a medal from Hitler in 1937. Watson didn’t seem to care how IBM’s data processing technology was being used to manage people, especially Jews. I hope the CEOs of AI companies today are more ethical.

ImageGraph: a visual programming language for the Visual Digital Humanities

Leonardo Impett has a nice demonstration here of ImageGraph: a visual programming language for the Visual Digital Humanities. ImageGraph is a visual programming environment that works with Google Colab. When you have your visual program, you can compile it into Python in a Colab notebook and then run that notebook. The visual program is stored in your GitHub account and the Python code can, of course, be used in larger projects.

The visual programming language has a number of functions for handling images and using artificial intelligence techniques on them. It also has text functions, but they are apparently not fully worked out.
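
I haven’t inspected the generated code, but presumably a compiled ImageGraph pipeline comes out as ordinary Python along these lines, load an image, push it through a pretrained model, read off predictions; the image file and choice of model here are my guesses, not Impett’s output.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for a pretrained classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
batch = preprocess(Image.open("painting.jpg")).unsqueeze(0)  # hypothetical image
with torch.no_grad():
    probs = model(batch).softmax(dim=1)
print(probs.topk(3))  # the three most likely ImageNet labels
```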

I love the way Impett combines off-the-shelf systems while adding a nice visual development environment. Very clean.

The ethics of regulating AI: When too much may be bad

By trying to put prior restraints on the release of algorithms, we will make the same mistake Milton’s censors were making in trying to restrict books before their publication. We will stifle the myriad possibilities inherent in an evolving new technology and the unintended effects that it will foster among new communities who can extend its reach into novel and previously unimaginable avenues. In many ways it will defeat our very goals for new technology, which is its ability to evolve, change and transform the world for the better.

3 Quarks Daily has another nice essay on ethics and AI by Ashutosh Jogalekar. This one is about The ethics of regulating AI: When too much may be bad. The argument is that we need to be careful about regulating algorithms preemptively. As the quote above makes clear, he makes three related points:

  • We need to be careful about censoring algorithms before they have been tried.
  • One reason is that it is very difficult to predict the negative or positive outcomes of new technologies. Innovative technologies almost always have unanticipated effects, and censoring them would limit our ability to learn about those effects and benefit from them.
  • Instead, we should manage the effects as they emerge.

I can imagine some responses to this argument:

  • Unanticipated effects are exactly what we should be worried about. The reason for censoring preemptively is precisely to control for unanticipated effects. Why not encourage better anticipation of effects?
  • Unanticipated effects, especially network effects, often only manifest themselves when the technology is used at scale. By then it can be difficult to roll back the technology: precisely when there is a problem is when we can no longer easily change the way the technology is used.
  • One person’s unanticipated effect is another person’s business or freedom. There is rarely consensus about the significance of an effect.

I also note how Jogalekar talks about the technology as if it had agency. He talks about the technology’s ability to evolve. Strictly speaking, the technology doesn’t evolve; our uses do. When it comes to innovation, we have to be careful not to ascribe agency to technology, as if it were some impersonal force that we could at best resist.

Excavating AI

The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Excavating AI is an important paper by Kate Crawford and Trevor Paglen that looks at “The Politics of Images in Machine Learning Training Sets.” They look at the different ways that politics and assumptions can creep into the training datasets that are (or were) widely used in AI:

  • There is the overall taxonomy used to annotate (label) the images.
  • There are the individual categories used, which could be problematic or irrelevant (see the sketch after this list).
  • There are the images themselves and how they were obtained.
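
Crawford and Paglen trace ImageNet’s person labels back to WordNet, and that layer of the taxonomy is easy to inspect yourself. Here is a small sketch, assuming NLTK and its WordNet data are installed, that walks the “person” subtree the labels came from:

```python
# pip install nltk; then: python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")
subtree = list(person.closure(lambda s: s.hyponyms()))  # every kind of "person"
print(len(subtree), "categories under 'person'")
for s in subtree[:10]:
    print(s.name(), "-", s.definition())
# Browsing further down the subtree turns up the loaded and derogatory
# categories the paper discusses.
```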

They point out that many of the image datasets used for face recognition have been trimmed or have disappeared as they were criticized, but they may still be influential, as they were downloaded and are still circulating in AI labs. These datasets, with their assumptions, have also been used to train commercial tools.

I particularly like how the authors present their work as an archaeology, perhaps in reference to Foucault (though they don’t mention him).

I would argue that we need an ethics of care and repair to maintain these datasets usefully.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a web site with an AI bot that produces inspirational quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and while generating them the system also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it shows a model of how we might use AI as a form of prompt, generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?
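
InspiroBot’s internals aren’t published, so here is a deliberately crude stand-in: a word-level Markov chain over a few invented seed quotes, enough to show how little machinery it takes to generate plausible-sounding inspirations.

```python
import random

seeds = [  # invented seed "inspirations"
    "believe in the silence inside your dreams",
    "your dreams are the silence of the universe",
    "the universe is inside your silence",
]

# Map each word to the words that follow it anywhere in the seeds.
chain: dict[str, list[str]] = {}
for quote in seeds:
    words = quote.split()
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)

# Random-walk the chain to produce a new "inspiration."
word = random.choice(list(chain))
out = [word]
for _ in range(8):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    out.append(word)
print(" ".join(out).capitalize() + ".")
```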

Thanks to Chelsea for this.