Last week I gave the 2023 Annual Public Lecture in Philosophy. You can watch a recording here. The talk was on The Eliza Effect: Data Ethics for Machine Learning.
I started the talk with the case of Kevin Roose’s interaction with Sydney (Microsoft’s name for Bing Chat), in which the chatbot ended up telling Roose that it loved him. From there I discussed some of the reasons we should be concerned about the latest generation of chatbots. I then looked at the ethics of LAION-5B as an example of how we can audit the ethics of projects. I ended with some reflections on what an ethics of AI could be.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
This open letter to AI labs follows a number of essays and opinion pieces suggesting that we may be moving too fast and should show restraint, all in the face of the explosive interest in large language models after ChatGPT.
Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk,” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.
On Making in the Digital Humanities fills a gap in our understanding of digital humanities projects and craft by exploring the processes of making as much as the products that arise from it. The volume draws focus to the interwoven layers of human and technological textures that constitute digital humanities scholarship.
On Making in the Digital Humanities is finally out from UCL Press. The book honours the work of John Bradley and those in the digital humanities who share their scholarship through projects. Stéfan Sinclair and I first started work on it years ago and were soon joined by Juliane Nyhan and later Alexandra Ortolja-Baird. It is a pleasure to see it finished.
I co-wrote the Introduction with Nyhan and wrote a final chapter on “If Voyant then Spyral: Remembering Stéfan Sinclair: A discourse on practice in the digital humanities.” Stéfan passed away during the editing of the volume.
Yesterday I was part of a signing ceremony for a Memorandum of Agreement between Ritsumeikan University and the University of Alberta. The President of the University of Alberta, Bill Flanagan, and I signed on behalf of the U of A. The memorandum described our desire to build on our collaborations around Replaying Japan. We hope to extend these collaborations into artificial intelligence, games, learning, and digital humanities. KIAS and the AI4Society signature area have been supporting this research collaboration.
Today (March 2nd, 2023) we are having a short conference at Ritsumeikan that includes a panel about our collaboration, at which I spoke, and a showcase of research in game studies at Ritsumeikan.
For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.
The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos are actually from a service called Synthesia, which allows you to generate videos of talking heads from transcripts that you prepare. The service offers a range of professionally acted avatars, and its technology generates a video of your text being presented. This is supposed to be used for quickly generating training videos (without paying actors), but someone used it for disinformation.
With the success of OpenAI’s ChatGPT, the other big AI companies are teasing us with their AIs. For example, The Verge tells us that Google’s new AI turns text into music. Alas, Google doesn’t seem to want to let us play with the AI, just admire the results.
D&D is a game for people who like rules: in order to play even the basic game, you had to make sense of roughly twenty pages of instructions, which cover everything from “Adjusting Ability Scores” (“Magic-users and clerics can reduce their strength scores by 3 points and add 1 to their prime requisite”) to “Who Gets the First Blow?” (“The character with the highest dexterity strikes first”). In fact, as I wandered farther into the cave, and acquired the rulebooks for Advanced Dungeons & Dragons, I found that there were rules for everything: … It would be a mistake to think of these rules as an impediment to enjoying the game. Rather, the rules are a necessary condition for enjoying the game, and this is true whether you play by them or not. The rules induct you into the world of D&D; they are the long, difficult scramble from the mouth of the cave to the first point where you can stand up and look around.
It seems to me that this is related to the various activities that were staged in Second Life and other environments. It also has connections to the machinima phenomenon, where people use 3D environments like games to stage acts that are then filmed.
Of course, the problem with performing in Fallout 76 is that the performers can get attacked during a performance.