AI Dungeon

AI Dungeon, an infinitely generated text adventure powered by deep learning.

Robert told me about AI Dungeon, a text adventure system that uses GPT-2, a language model from OpenAI that got a lot of attention when it was “released” in 2019. OpenAI felt it was too good to release openly as it could be misused. Instead they released a toy version. Now they have GPT-3, about which I wrote before.

AI Dungeon allows you to choose the type of world you want to play in (fantasy, zombies …). It then generates an infinite game by responding to whatever you type. I assume there is some memory, as it repeats my name and the basic setting.
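
Though AI Dungeon's actual implementation is not public, the basic loop is easy to imagine. Here is a minimal sketch using the openly released GPT-2 through the Hugging Face transformers library, with the "memory" approximated as a rolling transcript fed back in as the prompt. All of this is my assumption for illustration, not a description of the real system.

```python
# Minimal sketch of an AI Dungeon-style loop (a guess at how such a
# system might work, not AI Dungeon's actual implementation).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

# The "memory" is just a rolling transcript fed back in as the prompt.
story = "You are Geoffrey, a wizard in a fantasy kingdom. "

while True:
    action = input("> ")
    if action.strip().lower() == "quit":
        break
    story += "\n> " + action + "\n"
    # Crude context window: keep only the most recent characters so the
    # prompt stays within the model's input limit.
    prompt = story[-800:]
    output = generator(prompt, max_new_tokens=60, do_sample=True)
    continuation = output[0]["generated_text"][len(prompt):]
    story += continuation
    print(continuation)
```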

Replaying Japan 2020

Replaying Japan is an international conference dedicated to the study of Japanese video games. For the first time this year, the conference is being held online and will combine various types of research content (videos, texts, livestreams) on the theme of esports and competitive gaming in Japan.

This year the Replaying Japan conference was held online. The conference was originally going to be in Liège, Belgium at the Liège Game Lab. We were going to get to try Belgian fries and beer and learn more about the Game Lab. Alas, with the pandemic, the organizers had to pivot and organize an online conference. They did a great job using technologies like Twitch and Minecraft.

Keiji Amano, Tsugumi (Mimi) Okabe, and I had a paper on Ethics and Gaming: A Content Analysis of Annual Reports of the Japanese Game Industry presented by Prof. Amano. (To read the longer conference paper you need access to the conference materials, but they will be opening that up.) We looked at how major Japanese game companies frame ethical or CSR (corporate social responsibility) issues, a framing quite different from how ethics is discussed in the academy.

The two keynotes were both excellent in different ways. Florent Georges talked about First Steps of Japanese ESports. His talk introduced a number of important early video game competitions. 

Susana Tosca gave the closing keynote. She presented a nuanced and fascinating talk on Mediating the Promised Gameland (see video). She looked at how game tourists visit Japan and interviewed people about this phenomenon of content tourism. This was wrapped in reflections on methodology and tourism. Very interesting, though it raised some ethical issues about how we watch tourists. She was sensitive to the way that ethnographers are tourists of a sort and that we need to be careful not to mock our subjects as we watch them. As someone who loves to travel and is therefore often a tourist, I'm probably sensitive on this issue.

Sean Gouglas Remembers Stéfan Sinclair

Sean Gouglas shared these memories of Stéfan Sinclair with me and asked me to post them. They are from when they started the Humanities Computing programme at the University of Alberta where I am lucky to now teach.

In the summer of 2001, two newly-minted PhDs started planning how they were going to build and then teach a new graduate program in Humanities Computing at the University of Alberta. This was the first such program in North America. To be absolutely honest, Stéfan Sinclair and I really had no idea what we were doing. The next few months were both exhausting and exhilarating. Working with Stéfan was a professional and personal treat, especially considering that he had an almost infinite capacity for hard work. I remember him coding up the first Humanities Computing website in about seven minutes — the first HuCo logo appearing like a rising sun on a dark blue background. It also had an unfortunate typo that neither of us noticed for years. 

It was an inspiration to work with Stéfan. He was kind and patient with students, demanding a lot from them but giving even more back. He promoted the program passionately at every conference, workshop, and seminar. Over the next three years, there was a lot of coffee, a lot of spicy food, a beer or two, some volleyball, some squash, and then he and Stephanie were off to McMaster for their next adventure. 

Our Digital Humanities program has changed a lot since then — new courses, new programs, new faculty, and even a new name. Through that change, the soul of the program remained the same and it was shaped and molded by the vision and hard work of Stéfan Sinclair. 

On the 6th of August, Stéfan died of cancer. The Canadian Society for Digital Humanities has a lovely tribute, which can be found here: https://csdh-schn.org/stefan-sinclair-in-memoriam/. It was written in part by Geoffrey Rockwell, who worked closely with Stéfan for more than two decades. 

Celebrating Stéfan Sinclair: A Dialogue from 2007

Sadly, last Thursday Stéfan Sinclair passed away. A group of us posted an obituary for CSDH-SCHN, Stéfan Sinclair, In Memoriam, and boy do I miss him already. While the obituary describes the arc of his career, I've been trying to think of how to celebrate how he loved to play with ideas and code. The obituary tells the what of his life but doesn't show the how.

You see, Stéfan loved to toy with ideas of text through the development of software toys. The hermeneuti.ca project started with a one-day text analysis vacation/hackathon. We decided to leave all the busy work of being an academic in our offices and spend a day in the TAPoR lab at McMaster. We decided to mess around and try the analytical equivalent of extreme programming. That included a version of "pair programming" where we alternated: one of us at the keyboard doing the analysis while the other took notes and directed. We told ourselves we would devote just one day without interruptions to this folly and see if together we could take a project from conception to some sort of finished result in a day.

Little did we know we would still be at play right until a few weeks ago. We failed to finish that day, but we got far enough to know we enjoyed the fooling around enough to do it again and again. Those escapes into what we later called agile hermeneutics, to give it a serious name, eventually led to a monster of a project that reflected back on the play. The project culminated in the jointly authored book Hermeneutica (MIT Press, 2016) and Voyant 2.0, both of which tried not only to think through some of the potential of the play, but also to give others a way of making their own interpretative toys (which we called hermeneutica). But these too are perhaps too serious to commemorate Stéfan's presence.

Which brings me to the dialogue we wrote and performed on “Reading Tools.” Thanks to Susan I was reminded of this script that we acted out at the University of Illinois, Urbana-Champaign in June of 2007. May it honour how Stéfan would want to be remembered. Imagine him smiling at the front of the room as he starts,

Sinclair: Why do we care so much for the opinions of other humanists? Why do we care so much whether they use computing in the humanities?

Rockwell: Let me tell you an old story. There was once a titan who invented an interpretative technology for his colleagues. No, … he wasn’t chained to a rock to have his liver chewed out daily. … Instead he did the smart thing and brought it to his dean, convinced the technology would free his colleagues from having to interpret texts and let them get back to the real work of thinking.

Sinclair: I imagine his dean told him that in the academy those who develop tools are not the best judges of their inventions and that he had to get his technology reviewed as if it were a book.

Rockwell: Exactly, and the dean said, “And in this instance, you who are the father of a text technology, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not study the old ways; they will trust to the external tools and not interpret for themselves. The technology which you have discovered is an aid not to interpretation, but to online publishing.”

Sinclair: Yes, Geoffrey, you can easily tell jokes about the academy, paraphrasing Socrates, but we aren’t outside the city walls of Athens, but in the middle of Urbana at a conference. We have a problem of audience – we are slavishly trying to please the other – that undigitized humanist – why don’t we build just for ourselves? …

Enjoy the full dialogue here: Reading Tools Script (PDF).

Leaving Humanist

I just read Dr. Bethan Tovey-Walsh's post on her blog about why she is Leaving Humanist and it raises important issues. Willard McCarty, the moderator of Humanist, a discussion list going since 1987, allowed the posting of a dubious note that made claims about anti-white racism and then refused to publish rebuttals for fear that an argument would erupt. We know about this thanks to Twitter, where Tovey-Walsh tweeted about it. I should add that her reasoning is balanced and avoids name-calling. Specifically, she argued that,

If Gabriel’s post is allowed to sit unchallenged, this both suggests that such content is acceptable for Humanist, and leaves list members thinking that nobody else wished to speak against it. There are, no doubt, many list members who would not feel confident in challenging a senior academic, and some of those will be people of colour; it would be immoral to leave them with the impression that nobody cares to stand up on their behalf.

I think Willard needs to make some sort of public statement or the list risks being seen as a place where potentially racist ideas go unchallenged.

August 11 Update: Willard McCarty has apologized and published some of the correspondence he received, including something from Tovey-Walsh. He ends by saying that he will not stand in the way of the concerns voiced about racism, but he attaches a condition to the expanded dialogue.

I strongly suggest one condition to this expanded scope, apart from care always to respect those with whom we disagree. That condition is relevance to digital humanities as a subject of enquiry. The connection between subject and society is, to paraphrase Kathy Harris (below), that algorithms are not pure, timelessly ideal, culturally neutral expressions but are as we are.

OSS advice on how to sabotage organizations or conferences

On Twitter someone posted a link to a 1944 OSS Simple Sabotage Field Manual. This includes simple, but brilliant advice on how to sabotage organizations or conferences.

This sounds a lot like what we academics normally do as a matter of principle. I particularly like the advice to "Make 'speeches.'" I imagine many will recognize themselves, in their less cooperative moments or in their committee meetings, in this list of actions.

The OSS (Office of Strategic Services) was the US office that turned into the CIA.

The Man Behind Trump’s Facebook Juggernaut

Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.

I just finished reading important reporting about The Man Behind Trump’s Facebook Juggernaut in the March 9th, 2020 issue of the New Yorker. The long article suggests that it wasn’t Cambridge Analytica or the Russians who swung the 2016 election. If anything had an impact it was the extensive use of social media, especially Facebook, by the Trump digital campaign under the leadership of Brad Parscale. The Clinton campaign focused on TV spots and believed they were going to win. The Trump campaign gathered lots of data, constantly tried new things, and drew on their Facebook “embed” to improve their game.

If each variation is counted as a distinct ad, then the Trump campaign, all told, ran 5.9 million Facebook ads. The Clinton campaign ran sixty-six thousand. “The Hillary campaign thought they had it in the bag, so they tried to play it safe, which meant not doing much that was new or unorthodox, especially online,” a progressive digital strategist told me. “Trump’s people knew they didn’t have it in the bag, and they never gave a shit about being safe anyway.” (p. 49)

One interesting service Facebook offered was "Lookalike Audiences": you could upload a spotty list of information about people, and Facebook would first fill it out from its own data and then find you more people who are similar. This lets you expand the list of people you can microtarget (and gets you paying Facebook for more targeted ads).
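
Conceptually, this kind of audience expansion resembles a nearest-neighbour search over user feature vectors: take the seed list, locate those people in the platform's much richer data, and return the users closest to them. The sketch below illustrates the general idea with made-up data; Facebook's actual method is proprietary, and the features and numbers here are purely hypothetical.

```python
# Conceptual sketch of "lookalike" audience expansion as nearest-neighbour
# search over user feature vectors. An illustration of the general idea
# with fabricated data, not Facebook's actual (proprietary) method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical feature vectors for the platform's users (demographics,
# interests, behaviour, etc.), which the platform holds for everyone.
all_users = rng.random((10_000, 20))

# The advertiser's "spotty" seed list, matched to rows in the platform's data.
seed_indices = rng.choice(10_000, size=50, replace=False)
seed_users = all_users[seed_indices]

# For each seed user, find the most similar other users on the platform.
nn = NearestNeighbors(n_neighbors=10).fit(all_users)
_, neighbours = nn.kneighbors(seed_users)

# The expanded, much larger audience to microtarget.
lookalikes = set(neighbours.ravel()) - set(seed_indices)
print(f"Expanded {len(seed_indices)} seeds into {len(lookalikes)} lookalikes")
```

However it is actually implemented, the asymmetry is the point: the advertiser brings a small, noisy list, and the platform's data does the rest.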

The end of the article gets depressing as it recounts how little the Democrats are doing to counter or match the social media campaign for Trump which was essentially underway right after the 2016 election. One worries, by the end, that we will see a repeat.

Marantz, Andrew. (2020, March 9). "#WINNING: Brad Parscale used social media to sway the 2016 election. He's poised to do it again." New Yorker, pp. 44-55.

Philosophers On GPT-3

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI‘s recently released API to GPT-3, see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don't know if it is brilliant marketing or genuine concern, but OpenAI is continuing to treat this technology as something to be careful about. Chalmers' framing of the ethical questions is the passage quoted at the top of this post.

Annette Zimmerman in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training). It is not a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point that any AI application will have to make use of concepts from the application domain, and all of these concepts will be contested. There are no simple concepts, just as there are no concepts that don't change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.

Gekiga’s new frontier: the uneasy rise of Yoshiharu Tsuge

Cover of The Swamp

In honour of Drawn & Quarterly's publication of Yoshiharu Tsuge's The Swamp, Boing Boing has published an essay on Tsuge by Mitsuhiro Asakawa, titled Gekiga's new frontier: the uneasy rise of Yoshiharu Tsuge. The essay sketches Tsuge's rise as an early and original manga artist and explains his importance. Now Montreal-based Drawn & Quarterly is publishing a series of seven translations by Ryan Holmberg of Tsuge's work. (Holmberg also translated Asakawa's essay.) Asakawa was also apparently key to the series being published at all.

Mitsuhiro Asakawa finally convinced Tsuge and his son to let the work be translated into English. Mitsuhiro is the unsung hero of Japanese comics translation. He’s the guy who has written the most about the Garo era, he’s the go-to guy to connect with these great authors and their families. Most of the collections D+Q have done wouldn’t exist without his help.

(From the Drawn & Quarterly blog post here.)

One of the things I discovered reading Asakawa is that Tsuge, when he was going through a rough patch, worked with/for Shigeru Mizuki, my favourite manga artist.

In the realm of paper tigers – exploring the failings of AI ethics guidelines

But even the ethical guidelines of the world’s largest professional association of engineers, IEEE, largely fail to prove effective as large technology companies such as Facebook, Google and Twitter do not implement them, notwithstanding the fact that many of their engineers and developers are IEEE members.

AlgorithmWatch is maintaining an inventory of AI ethics frameworks and principles. Their evaluation is that these are not making much of a difference; see In the realm of paper tigers – exploring the failings of AI ethics guidelines. They also note there are few from the Global South: principles seem to be published mostly by countries that have an AI industry.