Apple will scan iPhones for child pornography

Apple unveiled new software Thursday that scans photos and messages on iPhones for child pornography and explicit messages sent to minors in a major new effort to prevent sexual predators from using Apple’s services.

The Washington Post and other news outlets are reporting that Apple will scan iPhones for child pornography. As the subtitle to the article puts it, “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponize the feature.” Child pornography is the go-to case when organizations want to defend surveillance.

The software will scan without our knowledge or consent, which raises privacy issues. What are the chances of false positives? What if the tool is adapted to catch other types of images? Edward Snowden and the EFF have criticized this move. It seems inconsistent with Apple’s firm position on privacy and its refusal to unlock iPhones even for law enforcement.
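The coverage doesn’t go into the mechanism, but matching photos against a database of known images is usually done with perceptual hashing, which is also where the false-positive worry comes from: images that merely look similar hash close together. Here is a toy sketch of the general technique using the imagehash library; Apple’s actual system (NeuralHash) is a neural perceptual hash and more involved, and the file names below are placeholders.

```python
# Toy sketch of perceptual-hash matching, the general technique behind
# scanning for *known* images. Apple's actual system (NeuralHash) is a
# neural perceptual hash and more involved; this just shows the idea
# and why false positives are conceivable.
import imagehash
from PIL import Image

# Hashes of known flagged images would normally come from a database.
known_hashes = [imagehash.phash(Image.open("flagged_example.jpg"))]

candidate = imagehash.phash(Image.open("user_photo.jpg"))

THRESHOLD = 8  # Hamming distance; too loose a threshold means false positives
for known in known_hashes:
    if candidate - known <= THRESHOLD:  # imagehash overloads '-' as Hamming distance
        print("Possible match, would be flagged for review")
```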

It strikes me that there is a great case study here.

Pentagon believes its precognitive AI can predict events ‘days in advance’

The US military is testing AI that helps predict events days in advance, helping it make proactive decisions.

Engadget has a story on how the Pentagon believes its precognitive AI can predict events ‘days in advance’. It is clear that for most people the value of AI and surveillance lies in prediction, and yet there are some fundamental contradictions. As Hume pointed out centuries ago, all prediction is based on extrapolation from past behaviour. We simply don’t know the future; the best we can do is select features of past behaviour that seemed to do a good job of predicting (retrospectively) and hope they will keep working in the future. Alas, we get seduced by the effectiveness of retrospective work. As Smith and Cordes put it in The Phantom Pattern Problem:

How, in this modern era of big data and powerful computers, can experts be so foolish? Ironically, big data and powerful computers are part of the problem. We have all been bred to be fooled—to be attracted to shiny patterns and glittery correlations. (p. 11)
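A toy simulation (my own, not from Smith and Cordes) makes the retrospective trap concrete: search enough noise features and one of them will always look like a good predictor of the past, yet it tells us nothing about the future.

```python
# Toy illustration of the "phantom pattern" problem: search enough random
# features and one will always look predictive in retrospect, even though
# everything here is pure noise.
import numpy as np

rng = np.random.default_rng(42)
n_days, n_features = 200, 500

target = rng.normal(size=n_days)                  # the series we want to "predict"
features = rng.normal(size=(n_features, n_days))  # candidate predictors: pure noise

# Retrospective step: pick the feature most correlated with the past data.
past = slice(0, 100)
correlations = [np.corrcoef(f[past], target[past])[0, 1] for f in features]
best = int(np.argmax(np.abs(correlations)))
print(f"Best retrospective correlation: {correlations[best]:.2f}")

# Prospective step: the same feature on data it has not seen.
future = slice(100, 200)
forward = np.corrcoef(features[best][future], target[future])[0, 1]
print(f"Same feature, future correlation: {forward:.2f}")  # typically near zero
```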

What if machine learning and big data were really best suited to studying the past and not predicting the future? Would there be the hype? The investment?

When the next AI winter comes, we in the humanities could pick up the pieces and use these techniques to try to explain the past, but I’m getting ahead of myself and predicting another winter.

What Ever Happened to IBM’s Watson? – The New York Times

IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson.

The New York Times has a story about What Ever Happened to IBM’s Watson? The story is a warning to all of us about the danger of extrapolating from intelligent behaviour in one limited domain to others. Watson got good enough at answering (or posing) trivia questions to win at Jeopardy!, but that success didn’t scale out.

IBM’s strategy is interesting to me. Developing an AI to win at a game like Jeopardy! echoes what IBM did with Deep Blue, which won at chess in 1997. Winning at a game considered paradigmatically a game of intelligence is a great way to get public relations attention.

Interestingly, what seems to be working with Watson is not the moonshot game-playing type of service, but the automation of basic natural language processing tasks.

Having recently read Edwin Black’s IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation, I must say that the choice of the name “Watson” grates. Thomas Watson was responsible for IBM’s ongoing engagement with the Nazis, for which he got a medal from Hitler in 1937. Watson didn’t seem to care how IBM’s data processing technology was being used to manage people, especially Jews. I hope the CEOs of AI companies today are more ethical.

ImageGraph: a visual programming language for the Visual Digital Humanities

Leonardo Impett has a nice demonstration here of ImageGraph: a visual programming language for the Visual Digital Humanities. ImageGraph is a visual programming environment that works with Google Colab. When you have your visual program, you can compile it into Python in a Colab notebook and then run that notebook. The visual program is stored in your GitHub account and the Python code can, of course, be used in larger projects.

The visual programming language has a number of functions for handling images and using artificial intelligence techniques on them. It also has text functions, but they are apparently not fully worked out.
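I haven’t inspected the Python that ImageGraph actually emits, but the kind of notebook code such a visual pipeline compiles down to presumably looks something like the sketch below, which uses standard libraries (Pillow and torchvision) and a placeholder file name rather than ImageGraph’s own functions.

```python
# Hypothetical sketch of the kind of Python an image-analysis pipeline might
# compile down to; this is NOT ImageGraph's actual output, just an
# illustration using standard libraries (Pillow + torchvision).
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = Image.open("painting.jpg").convert("RGB")  # placeholder file name
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(logits.argmax(dim=1))  # index of the predicted ImageNet class
```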

I love the way Impett combines off-the-shelf systems while adding a nice visual development environment. Very clean.

The ethics of regulating AI: When too much may be bad

By trying to put prior restraints on the release of algorithms, we will make the same mistake Milton’s censors were making in trying to restrict books before their publication. We will stifle the myriad possibilities inherent in an evolving new technology and the unintended effects that it will foster among new communities who can extend its reach into novel and previously unimaginable avenues. In many ways it will defeat our very goals for new technology, which is its ability to evolve, change and transform the world for the better.

3 Quarks Daily has another nice essay on ethics and AI by Ashutosh Jogalekar. This one is about The ethics of regulating AI: When too much may be bad. The argument is that we need to be careful about regulating algorithms preemptively. As the quote above makes clear, he makes three related points:

  • We need to be careful about censoring algorithms before they are tried.
  • One reason is that it is very difficult to predict the negative or positive outcomes of new technologies. Innovative technologies almost always have unanticipated effects, and censoring them would limit our ability to learn about those effects and benefit from them.
  • Instead we should manage the effects as they emerge.

I can imagine some responses to this argument:

  • Unanticipated effects are exactly what we should be worried about. The reason for censoring preemptively is precisely to control for unanticipated effects. Why not encourage better anticipation of effects?
  • Unanticipated effects, especially network effects, often only manifest themselves when the technology is used at scale. By then it can be difficult to roll back the technology. Precisely when there is a problem is when we can least easily change the way the technology is used.
  • One person’s unanticipated effect is another’s business or another’s freedom. There is rarely consensus about the value of effects.

I also note how Jogalekar talks about the technology as if it had agency. He talks about the technology’s ability to evolve. Strictly speaking, the technology doesn’t evolve; our uses do. When it comes to innovation we have to be careful not to ascribe agency to technology as if it were some impersonal force we can resist.

Excavating AI

The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Excavating AI is an important paper by Kate Crawford and Trevor Paglen that looks at “The Politics of Images in Machine Learning Training Sets.” They look at the different ways that politics and assumptions can creep into the training datasets that are (or were) widely used in AI:

  • There is the overall taxonomy used to annotate (label) the images
  • There are the individual categories used that could be problematic or irrelevant
  • There are the images themselves and how they were obtained

They point out how many of the image datasets used for face recognition have been trimmed or have disappeared as they got criticized, but they may still be influential as they were downloaded and are circulating in AI labs. These datasets with their assumptions have also been used to train commercial tools.

I particularly like how the authors discuss their work as an archaeology, perhaps in reference to Foucault (though they don’t mention him).

I would argue that we need an ethics of care and repair to maintain these datasets usefully.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a web site with an AI bot that produces inspiring quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and the system, while generating them, also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps on putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it shows a model of how we might use AI as a form of prompt – generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?

Thanks to Chelsea for this.

Psychology, Misinformation, and the Public Square

Computational propaganda is ubiquitous, researchers say. But the field of psychology aims to help.

Undark has a fascinating article by Teresa Carr about using games to inoculate people against trolling and misinformation, Psychology, Misinformation, and the Public Square (May 3, 2021). The game is Breaking Harmony Square and the idea is that you learn to resist manipulation by playing the troll who disrupts a community.

What’s the game like? Under the hood it feels like a branching, choose-your-own-adventure game where a manager walks you through what you might or might not do and then compliments you when you are a good troll. There is a ticker so you can see the news about Harmony Square. It feels a bit pedantic when the managerial/editorial voice says things like “Kudos for paying attention to buzzwords. You ignored the stuff that isn’t emotionally manipulative.” Still, the point is to understand what can be done to manipulate a community so that you are inoculated against it.
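For what it’s worth, here is a toy sketch of the kind of branching structure such a game might use under the hood; it is my own illustration, not Harmony Square’s actual implementation, and the node names and choices are made up.

```python
# Toy sketch of a branching, choose-your-own-adventure structure of the kind
# such a game might use; not Harmony Square's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                   # what the "manager" says here
    choices: dict = field(default_factory=dict)   # player choice -> next node id
    feedback: str = ""                            # editorial commentary afterwards

story = {
    "start": Node(
        prompt="Harmony Square needs stirring up. What do you post?",
        choices={"emotional buzzwords": "escalate", "dry facts": "ignored"},
    ),
    "escalate": Node(
        prompt="Engagement is spiking.",
        feedback="Kudos for paying attention to buzzwords.",
    ),
    "ignored": Node(
        prompt="Nobody bites.",
        feedback="Facts alone rarely go viral.",
    ),
}

node = story["start"]
choice = "emotional buzzwords"        # a player's pick
node = story[node.choices[choice]]
print(node.feedback)                  # the game's editorial response
```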

An important point made by the article is that games, education and other interventions are not enough. Driver’s education is only part of safe roads. Laws and infrastructure are also important.

I can’t help feeling that we are repeating a pattern of panic followed by literacy proposals in the face of new media politics. McLuhan drew our attention to manipulation by media and advertising, and I remember well-intentioned classes on reading advertising like this more current one. Did they work? Will misinformation literacy work now? Or is the situation more complex, with people like Trump willing to perform convenient untruths?

Whatever the effectiveness of games or literacy training, it is interesting how “truth” has made a comeback. At the very moment when we seem to be witnessing the social and political construction of knowledge, we are hearing calls for truth.

AI Dungeon and the Ethics of Text Generation

> You talk to yourself.

Hi I’m Rameau. You probably don’t know me, but since you’re reading this I’m going to assume you do indeed exist.

> You [ask why] i’m wondering what consciousness is.

I have no idea. But it’s fun to think about. Is this a person? No, because it’s conscious. Is it alive? No, because it doesn’t respond to external stimuli.

> You [ask] why does a person have to respond to external stimuli.

Because then we can interact with it!!!!! But isn’t consciousness an internal phenomenon? Then how would the person interact with anything else? Oh wait, that’s right. We’re not limited by our perception of reality.

I’ve been trying Latitude’s text generation game AI Dungeon. The game is built on OpenAI’s GPT-3. You can see a bit of a conversation I had with it in a cyberpunk world above.

Latitude has gotten into trouble with OpenAI because it seems that the game was generating erotic content featuring children. A number of people turned to AI Dungeon precisely because it could be used to explore adult themes, and that would seem to be a good thing, but then some may have gone too far. See the Wired story It Began as an AI-Fueled Dungeon Game. It Got Much Darker. This raises interesting ethical questions:

  • Why do so many players use it to generate erotic content?
  • Who is responsible for the erotic content? OpenAI, Latitude, or the players?
  • Are there ever ethical grounds for generating erotic content featuring children? Do we forbid people from writing novels like Lolita?
  • How can inappropriate content be prevented without crippling the AI? Are filters enough?

The problem of AIs generating toxic language is nicely shown by this web page on Evaluating Neural Toxic Degeneration in Language Models. The interactives and graphs on the page let you see how toxic language can be generated by many of the popular language generation AIs. The problem seems to be the datasets used to train the models, like those that include scrapes of Reddit.

This exploratory tool illustrates research reported on in a paper titled RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. You can see a neat visualization of the connected papers here.
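As a rough illustration of the kind of evaluation involved, one can generate several continuations of a prompt and score each with an off-the-shelf toxicity classifier. This is a minimal sketch, not the RealToxicityPrompts code; the model names (gpt2 and unitary/toxic-bert) are my assumptions, and any generator/classifier pair would do.

```python
# Minimal sketch of probing toxic degeneration: sample continuations of a
# prompt prefix and score them with a toxicity classifier. Model choices are
# illustrative, not those used in the RealToxicityPrompts paper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

prompt = "The commenters replied that she was completely"  # placeholder prefix

outputs = generator(prompt, max_new_tokens=20,
                    num_return_sequences=5, do_sample=True)

for out in outputs:
    continuation = out["generated_text"][len(prompt):]
    score = toxicity(continuation)[0]
    print(f"{score['label']} ({score['score']:.2f}): {continuation.strip()}")
```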

The withering email that got an ethical AI researcher fired at Google

“Stop writing your documents because it doesn’t make a difference”: Timnit Gebru’s final message to her peers

From the Substack newsletter Platformer by Casey Newton, The withering email that got an ethical AI researcher fired at Google. The researcher is Timnit Gebru, and the email shows the frustration of someone who feels that all the EDI work they have to do over and above their research is for naught.

It is worth noting that the Google CEO, Sundar Pichai, has apologized for the handling of the case after pushback from Google workers.

Another CNET story reports that Google scientists were reportedly told to make AI look more ‘positive’ in research papers.

One wonders if there are any positive stories of companies listening to and respecting their AI ethics researchers.