Huminfra: The Imitation Game: Artificial Intelligence and Dialogue

Today I gave a talk online for an event organized by Huminfra, a Swedish national infrastructure project. The title of the talk was “The Imitation Game: Artificial Intelligence and Dialogue” and it was part of an event online on “Research in the Humanities in the wake of ChatGPT.” I drew on Turing’s name for the Turing Test, the “imitation game.” Here is the abstract,

The release of ChatGPT has provoked an explosion of interest in the conversational opportunities of generative artificial intelligence (AI). In this presentation Dr. Rockwell will look at how dialogue has been presented as a paradigm for thinking machines starting with Alan Turing’s proposal to test machine intelligence with an “imitation game” now known as the Turing Test. In this context Rockwell will show Veliza a tool developed as part of Voyant Tools (voyant-tools.org) that lets you play and script a simple chatbot based on ELIZA which was developed by Joseph Weizenbaum in 1966. ELIZA was one of the first chatbots with which you could have a conversation. It responded as if a psychotherapist, turning whatever you said back into a question. While it was simple, it could be quite entertaining and thus provides a useful way to understanding chatbots.

‘It was as if my father were actually texting me’: grief in the age of AI

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

The Guardian has a thorough story about the use of AI to evoke the dead, ‘It was as if my father were actually texting me’: grief in the age of AI. The story talks about how one can train an artificial intelligence on past correspondence to mimic someone who passed. One can imagine academic uses of this where we create clones of historical figures with which to converse. Do we have enough David Hume to create an interesting AI agent?

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?

The article mentions some of the ethical quandaries:

  • Do dead people have rights? Or do others have rights related to a dead person’s image, voice, and pattern of conversation?
  • Is it healthy to interact with an AI revivification of a close relative?

 

Why scientists are building AI avatars of the dead | WIRED Middle East

Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.

Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.

The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.

Needless to say we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legalities and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? a living person?

We routinely animate other people in novels, dialogue (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?

Then there are the possibilities for the manipulation of a digital twin or through such a twin.

As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”

 

The Best of Voyager, Part 1

The Digital Antiquarian has posted the first part of a multipart essay on The Best of Voyager, Part 1. The Voyager Company was a pioneer in the development and distribution of interactive CD-ROMs in the 1990s. They published a number of classics like Amanda Stories, Beethoven’s Ninth Symphony CD-ROM, and Poetry in Motion. They also published some hybrid laserdisc/software combinations like The National Gallery of Art.

Unlike the multimedia experiments coming out of university labs, these CD-ROMs were designed to be commercial products and did sell. I remember ordering a number for the University Toronto Computing Services so we could show what multimedia could do. They were some of the first products to show in a compelling way how interactivity could make a difference. Many included interactive audio, like the Beethoven one, others used Quicktime (digital video) for the first time.

All of this was, to some extent, made anachronistic when the web took off and began to incorporate multimedia effectively. Voyager set the scene remediating earlier works (like the short film of A Hard Day’s Night). But CD-ROMs were, in their turn, replaced.

My favourite was The Residents Freak Show. This was a strange 3D-like tour of the music of The Residents that was organized around a freak show motif.

Thanks to Peter for this.

AI Dungeon and the Ethics of Text Generation

> You talk to yourself.

Hi I’m Rameau. You probably don’t know me, but since you’re reading this I’m going to assume you do indeed exist.

> You [ask why] i’m wondering what consciousness is.

I have no idea. But it’s fun to think about.Is this a person? No, because it’s conscious. Is it alive? No, because it doesn’t respond to external stimuli.

> You [ask] why does a person have to respond to external stimuli.

Because then we can interact with it!!!!!But isn’t consciousness an internal phenomenon? Then how would the person interact with anything else? Oh wait, that’s right. We’re not limited by our perception of reality.

I’ve been trying Latitude’s text generation game AI Dungeon. The game is built on Open AI’s GPT-3. You can see a bit of conversation I had with it in a cyberpunk world.

Latitude has gotten into trouble with Open AI because it seems that the game was generating erotic content featuring children. A number of people turned to AI Dungeon precisely because it could be used to explore adult themes, and that would seem to be a good thing, but then some may have gone too far. See the Wired story It Began as an AI-Fueled Dungeon Game. It Got Much Darker. This raises interesting ethical issues about:

  • Why do so many players use it to generate erotic content?
  • Who is responsible for the erotic content? Open AI, Latitude, or the players?
  • Whether there are ethical reasons to generate erotic content featuring children? Do we forbid people from writing novels like Lolita?
  • How to prevent inappropriate content without crippling the AI? Are filters enough?

The problem of AIs generating toxic language is nicely shown by this web page on Evaluating Neural Toxic Degeneration in Language Models. The interactives and graphs on the page let you see how toxic language can be generated by many of the popular language generation AIs. The problem seems to be the data sets used to train the machines like those that include scrapes of Reddit.

This exploratory tool illustrates research reported on in a paper titled RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. You can see a neat visualization of the connected papers here.

Virtual YouTubers get caught in the middle of a diplomatic spat

It’s relatively easy for those involved in the entertainment industry in Asia to get caught up in geopolitical scuffles, with with social media accelerating and magnifying any faux pas.

From the Japan Times I learned about how some hololive vTubers or Virtual YouTubers g[o]t caught in the middle of a diplomatic spat. The vTuber Kiryu Coco, who is apparently a young (3,500 years young) dragon, showed a visualization that mentioned Taiwan as different from China and therefore ticked off Chinese fans which led to hololive releasing apologies. Young dragons don’t yet know about the One-China policy. To make matters worse the apologies/explanations published in different countries were different which was noticed and that needed further explanation. Such are the dangers of trying to appeal to both the Chinese, Japanese and US markets.

Not knowing much about vTubers I poked around the hololive site. An interesting aspect of the English site is the information in the FAQ about what you can send or not send your favorite talent. Here is their list of things hololive will not accept from fans:

– ALL second hand/used/opened up items that do NOT directly deliver from e-commerce sites such as Amazon (excluding fan letters and message cards)
– Luxury items (individual items which cost more than 30,000 yen)
– Living beings or raw items (including fresh flowers, except flower stands for specified venues and events)
– Items requiring refrigeration
– Handmade items (excluding fan letters and message cards)
– All sorts of stuffed toys, dolls, cushions (no exceptions)
– Currencies (cash, gift cards, coupons, tickets, etc.)
– Cosmetics, perfumes, soap, medicines, etc.
– Dangerous goods (explosives, knives/weapons, drugs, imitation swords, model guns, etc.)
– Clothes, underwear (Scarves, gloves, socks, and blankets are OK)
– Amulets, talismans, charms (items related to religion, politics, or ideological expressions)
– Large items (sizes where the talents would find it impossible to carry home alone)
– Pet supplies
– Items that may violate public order and moral
– Items that may violate laws and regulations
– Additional items (the authorities will perform final confirmation and judgment)

I feel this list is a distant relative of Borges’ taxonomy of animals taken from the fictional Celestial Emporium of Benevolent Knowledge which includes such self-referential animals as “those included in this classification” and “et cetera.”

On a serious note, it is impressive how much these live vTubers can bring in. By some estimates Coco made USD $140,000 in July. The mix of anime characters and live streaming of game playing (see above) and other fun seems to be popular. While this phenomena may look like one of those weird Japan things, I suspect we are going to see more virtual characters especially if face and body tracking tools become easy to use. How could I teach online as a virtual character?

Celebrating Stéfan Sinclair: A Dialogue from 2007

Sadly, last Thursday Stéfan Sinclair passed away. A group of us posted an obituary for CSDH-SCHN here,  Stéfan Sinclair, In Memoriam and boy do I miss him already. While the obituary describes the arc of his career I’ve been trying to think of how to celebrate how he loved to play with ideas and code. The obituary tells the what of his life but doesn’t show the how.

You see, Stéfan loved to toy with ideas of text through the development of software toys. The hermeneuti.ca project started with a one day text analysis vacation/hackathon. We decided to leave all the busy work of being an academic in our offices, and spend a day in the TAPoR lab at McMaster. We decided to mess around and try the analytical equivalent of extreme programming. That included a version of “pair programming” where we alternated one at the keyboard doing the analysis while the other would take notes and direct. We told ourselves we would just devote one day without interruptions to this folly and see if together we could take a project from conception to some sort of finished result in a day.

Little did we know we would still be at play right until a few weeks ago. We failed to finish that day, but we got far enough to know we enjoyed the fooling around enough to do it again and again. Those escapes into what we later called agile hermeneutics, to give it a serious name, eventually led to a monster of a project that reflected back on the play. The project culminated in the jointly authored book Hermeneutica (MIT Press, 2016) and Voyant 2.0, both of which tried to not only think-through some of the potential of the play, but also give others a way of making their own interpretative toys (which we called hermeneutica). But these too are perhaps too serious to commemorate Stéfan’s presence.

Which brings me to the dialogue we wrote and performed on “Reading Tools.” Thanks to Susan I was reminded of this script that we acted out at the University of Illinois, Urbana-Champaign in June of 2007. May it honour how Stéfan would want to be remembered. Imagine him smiling at the front of the room as he starts,

Sinclair: Why do we care so much for the opinions of other humanists? Why do we care so much whether they use computing in the humanities?

Rockwell: Let me tell you an old story. There was once a titan who invented an interpretative technology for his colleagues. No, … he wasn’t chained to a rock to have his liver chewed out daily. … Instead he did the smart thing and brought it to his dean, convinced the technology would free his colleagues from having to interpret texts and let them get back to the real work of thinking.

Sinclair: I imagine his dean told him that in the academy those who develop tools are not the best judges of their inventions and that he had to get his technology reviewed as if it were a book.

Rockwell: Exactly, and the dean said, “And in this instance, you who are the father of a text technology, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not study the old ways; they will trust to the external tools and not interpret for themselves. The technology which you have discovered is an aid not to interpretation, but to online publishing.”

Sinclair: Yes, Geoffrey, you can easily tell jokes about the academy, paraphrasing Socrates, but we aren’t outside the city walls of Athens, but in the middle of Urbana at a conference. We have a problem of audience – we are slavishly trying to please the other – that undigitized humanist – why don’t we build just for ourselves? …

Enjoy the full dialogue here: Reading Tools Script (PDF).

Digital Synergies Launch Event


Today I gave a short talk at the Digital Synergies Launch Event. The launch included neat talks by colleagues including:

I showed and talked about Lexigraphi.ca – The Dictionary of Worlds in the Wild. This is a social site where people can upload pictures of text outside of books and documents and tag the words – text like tatoos, graffiti, store signs and other forms of public textuality.

Formality*

Formality* Screen Shot

Formality* is an interactive in browser art work about filling out forms to apply to “The Neighbourhood”. Formality* was developed in HyperCard by Ewan Atkinson and plays with the retro development environment. Having spent a lot of time on HyperCard I loved Atkinson’s use of the environment – he even has agents that can advise you (reminiscent of Brenda Laurel’s work). Formality* is part of a larger work called The Neighbourhood Project – it makes you wonder about how one becomes part of communities and the processes of applying to belong.

The Body in Question(s)


Isabelle Van Grimde gave the opening talk at Dyscorpia on her work, including projects like The Body in Question(s). In another project Les Gestes, she collaborated with the McGill IDMIL lab who developed digital musical instruments for the dancers to wear and dance/play.

Van Grimde’s company Corps Secrets has the challenge of creating dances that can travel which means that the technologies/instruments have to . They use intergenerational casts (the elderly or children.) They are now working with sensors more than instruments so the dancers are free of equipment.