The Lives of Literary Characters

The goal of this project is to generate knowledge about the behaviour of literary characters at large scale and make this data openly available to the public. Characters are the scaffolding of great storytelling. This Zooniverse project will allow us to crowdsource data to train AI models to better understand who characters are and what they do within diverse narrative worlds, all to answer one very big question: why do human beings tell stories?

Today we are going live on Zooniverse with our Citizen Science (crowdsourcing) project, The Lives of Literary Characters. The goal of the project is to offer micro-tasks in which volunteers annotate literary passages, producing training data for our models. It will be interesting to see if we get a decent number of volunteers.

Before setting this up we did some serious reading around the ethics of crowdsourcing as we didn’t want to just exploit readers.


OpenAI’s GPT store is already being flooded with AI girlfriend bots

OpenAI’s store rules are already being broken, illustrating that GPTs could be hard to regulate

From Slashdot I learned about a story on how OpenAI’s GPT store is already being flooded with AI girlfriend bots. It isn’t particularly surprising that you can get different girlfriend bots. Nor is it surprising that these are something you can build with GPT-4. ChatGPT is, after all, a chatbot. What will be interesting to see is whether these chatbot girlfriends are successful. I would have imagined that men would want pornographic girlfriends and that the market for AI companions would lean more toward boyfriends along the lines of what Replika offers.

Column: AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it

AI investors say their work is so important that they should be able to trample copyright law on their pathway to riches. Here’s why you shouldn’t believe them.

Michael Hiltzik has a nice column about how AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it. He quotes Andreessen Horowitz, a venture capital firm investing heavily in AI, as saying,

The only way AI can fulfill its tremendous potential is if the individuals and businesses currently working to develop these technologies are free to do so lawfully and nimbly.

This is like saying that the businesses of the mafia could fulfill their potential if they were allowed to operate lawfully and nimbly. It also assumes that AI has tremendous potential and no pernicious side effects. Do we really know there is positive potential and that it is tremendous?

Hiltzik is quite good on the issue of training on copyrighted material, something playing out in the courts as we speak. I suspect that if the courts allow the free use of large content platforms for model training, we will then find these collections of content sequestered behind license walls that prevent scraping.

How AI Image Generators Make Bias Worse – YouTube

A team at the LIS (London Interdisciplinary School) have created a great short video on the biases of AI image generators. The video covers the issues quickly and is documented with references you can follow for more. I had been looking at how image generators portrayed academics like philosophers, but this video reports on research that went much further.

What is also interesting is how this grew out of an LIS undergrad’s first-year project. It says something about LIS that they encourage and build on such projects. This got me wondering about LIS, which I had never heard of before. It seems to be a new teaching college in London, UK, built around interdisciplinary programmes, rather than departments, that deal with “real-world problems.” It sounds a bit like problem-based learning.

Anyway, it will be interesting to watch how it evolves.

CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama

The Eight Sleep pod is a mattress topper with a terms of service and a privacy policy. The company “may share or sell” the sleep data it collects from its users.

From Slashdot, a story about how a CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama. The story is worrisome because of the data being gathered by a smart mattress company and the uses it is being put to. I’m less sure about the inferences the CEO (Matteo Franceschetti) draws from his data and his call to “fix this.” How would Eight Sleep fix this? Sell more product?

Huminfra: The Imitation Game: Artificial Intelligence and Dialogue

Today I gave an online talk for an event organized by Huminfra, a Swedish national infrastructure project, on “Research in the Humanities in the wake of ChatGPT.” The title of the talk was “The Imitation Game: Artificial Intelligence and Dialogue”; I drew on Turing’s name for the Turing Test, the “imitation game.” Here is the abstract,

The release of ChatGPT has provoked an explosion of interest in the conversational opportunities of generative artificial intelligence (AI). In this presentation Dr. Rockwell will look at how dialogue has been presented as a paradigm for thinking machines, starting with Alan Turing’s proposal to test machine intelligence with an “imitation game,” now known as the Turing Test. In this context Rockwell will show Veliza, a tool developed as part of Voyant Tools (voyant-tools.org) that lets you play with and script a simple chatbot based on ELIZA, which was developed by Joseph Weizenbaum in 1966. ELIZA was one of the first chatbots with which you could have a conversation. It responded as if it were a psychotherapist, turning whatever you said back into a question. While it was simple, it could be quite entertaining and thus provides a useful way to understand chatbots.
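To give a sense of how little machinery ELIZA needs, here is a minimal sketch of an ELIZA-style exchange in Python. The reflection table and the pattern rules are my own illustrative examples, not Weizenbaum’s original DOCTOR script (and not how Veliza is implemented):

```python
import re

# Illustrative pronoun swaps; Weizenbaum's DOCTOR script was much richer.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# A few pattern -> response templates in the spirit of the DOCTOR script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    (re.compile(r".*", re.I), "Can you tell me more about that?"),
]

def reflect(fragment):
    """Swap first- and second-person words so input can be echoed back."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    """Answer with the template of the first rule whose pattern matches."""
    text = utterance.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a holiday"))    # Why do you need a holiday?
print(respond("I am feeling tired"))  # How long have you been feeling tired?
```

The trick is simply that first-person fragments are reflected into the second person and wrapped in a question, which is why the conversation can feel responsive while the program understands nothing.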

PARRY encounters the DOCTOR (RFC439)

V. Cerf set up a dialogue between two of the most famous early chatbots: PARRY encounters the DOCTOR (RFC439). The DOCTOR is the therapist script for Weizenbaum’s ELIZA and is how people usually encounter ELIZA. PARRY was developed by Kenneth Colby and acts like a paranoid schizophrenic. Putting them into dialogue therefore makes a kind of sense, and the result is amusing.

It is also interesting that this is an RFC (Request For Comments), a genre normally reserved for Internet technical documents.

The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

Today and tomorrow representatives from a number of countries have gathered at Bletchley Park to discuss AI safety. Close to 30 countries, including Canada, were represented, and they issued The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. This declaration starts with,

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

The declaration discusses opportunities and the need to support innovation, but also mentions that “AI also poses significant risks” and mentions the usual suspects, especially “capable, general-purpose models” that could be repurposed for misuse.

What stands out is the commitment to international collaboration among the major players, including China. This is a good sign.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.

Bletchley Park is becoming a UK symbol of computing. It was, of course, where the Allied code-breaking centre was set up. It is where Turing worked on breaking the Enigma ciphers, and where Colossus, an important early computer, was used to decode German ciphers and give the Allies a crucial advantage. It is appropriate that UK Prime Minister Sunak has used this site to gather representatives. Unfortunately few leaders joined him there, sending representatives instead, though Trudeau may show up on the 2nd.

Alas, the Declaration is short on specifics, though individual countries like the United States and Canada are securing voluntary commitments from players to abide by codes of conduct. China and the EU are also passing laws regulating artificial intelligence.

One thing not mentioned at all is the danger of military uses of AI. It is as if warbots are off the table in AI safety discussions.

The good news is that there will be follow up meetings at which we can hope that concrete agreements might be worked out.


A Mirror Test for AI

I recently tried a version of the Mirror Test on ChatGPT 4. I asked it “What is life?” and then, for each answer, I copied the first sentence or two and put it back into the prompt. I wanted to see if ChatGPT was aware that I was parroting its answers back to it. Alas not … it just kept churning through various answers. The openings of its answers were, in order of appearance:

The question “What is life?” has been pondered by philosophers, scientists, theologians, and thinkers of all disciplines for millennia. …

Indeed, the question of “What is life?” is one of the most profound and enduring inquiries in human history. …

Absolutely. The quest to define and understand life is deeply embedded in the human psyche and is reflected in our art, science, philosophy, and spirituality. …

It didn’t repeat itself, but it never asked me why I was repeating what it said. Obviously it fails the Mirror Test.
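For anyone who wants to reproduce this, here is a minimal sketch of the loop using the OpenAI Python library (openai >= 1.0). The model name, the number of rounds, and the crude sentence splitting are my assumptions; the original experiment was done by hand in the ChatGPT interface:

```python
# A minimal sketch of the mirror-test loop, assuming the OpenAI Python
# library (openai >= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "What is life?"}]

for round_number in range(3):  # round count is an illustrative choice
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(f"Round {round_number + 1}: {answer[:120]}...")
    # Keep the conversation going, then parrot back the first sentence or two.
    messages.append({"role": "assistant", "content": answer})
    first_sentences = ". ".join(answer.split(". ")[:2])
    messages.append({"role": "user", "content": first_sentences})
```

A model that passed this version of the test would at some point comment on the repetition rather than treating each parroted fragment as a fresh question.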


Artificial General Intelligence Is Already Here

Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.

Blaise Agüera y Arcas and Peter Norvig have an essay making the argument that Artificial General Intelligence Is Already Here. Their point is that the latest machines like ChatGPT are far more general than previous narrow AIs. They may not be as general as a human, at least without embodiment, but they can do all sorts of textual tasks, including tasks not deliberately programmed into them. Some of the ways they are general include their ability to deal with all sorts of topics, to perform different types of tasks, to handle different modalities (images, text …), their language ability, and their instructability.

The article also mentions reasons why people are still reluctant to admit that we have a form of AGI:

  • “A healthy skepticism about metrics for AGI

  • An ideological commitment to alternative AI theories or techniques

  • A devotion to human (or biological) exceptionalism

  • A concern about the economic implications of AGI”

To some extent the goalposts change as AIs solve different challenges. We used to think playing chess well was a sign of intelligence; now that we know how a computer can do it, it no longer seems a test of intelligence.