Replaying Japan 2023

Replaying Japan 2023 – The 11th International Japan Game Studies Conference – Conference Theme: Local Communities, Digital Communities and Video Games in Japan

I’m back in Canada after Replaying Japan 2023 in Nagoya, Japan. I kept conference notes here for those interested. The book of abstracts is here and the programme is here. Next year the conference will be held in August at the University at Buffalo and the Strong Museum in Rochester. Some points of interest:

  • Nökkvi Jarl Bjarnason gave a talk on the emergence of national and regional game studies. What does it mean to study game culture in a country or region? How is locality appealed to in game media or games or other aspects of game culture?
  • Felania Liu presented on game preservation in China and the challenges her team faces including issues around the legitimacy of game studies.
  • Hirokazu Hamamura gave the final keynote on the evolution of game media starting with magazines and then shifting to the web.
  • I presented a paper co-written with Miki Okabe and Keiji Amano. We started with the demographic challenges Japan faces as its population shrinks. We then looked at what Japanese game companies are doing to attract and support women and families. There is a work ethic that puts men and women in a bind: they are expected to work such long hours that there really isn’t any time left for “work-life balance.”

The conference was held in person at Nagoya Zokei University and brilliantly organized by Keiji Amano and Jean-Marc Pelletier. We limited online interventions to short lightning talks so there was good attendance.

Signing of MOU

See https://twitter.com/PTJCUA1/status/1630853467605721089

Yesterday I was part of a signing ceremony for a Memorandum of Understanding (MOU) between Ritsumeikan University and the University of Alberta. The President of the University of Alberta (Bill Flanagan) and I signed on behalf of the U of A. The MOU describes our desire to build on our collaborations around Replaying Japan and to develop new ones around artificial intelligence, games, learning, and digital humanities. KIAS and the AI4Society signature area have been supporting this research collaboration.

Today (March 2nd, 2023) we are holding a short conference at Ritsumeikan that includes a panel about our collaboration, at which I spoke, and a showcase of research in game studies at Ritsumeikan.

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue he and a collaborator had with LaMDA that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA really is conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (And we could even doubt what we think we are feeling.) One answer is that we have a theory of mind such that we believe that things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what something has to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well? Is the very conviction of Lemoine and others enough, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point, it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if you have doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you need to have a theory of why their consciousness is false.

The Internet is Made of Demons

The Internet Is Not What You Think It Is is not what you think it is.

Sam Kriss has written a longish review essay on Justin E.H. Smith’s The Internet is Not What You Think It Is with the title The Internet is Made of Demons. In the first part Kriss writes about how the internet is possessing us and training us,

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech—not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels.

Then Kriss makes the case that the Internet is made of demons – not in a paranoid conspiracy sort of way, but in a historical sense that ideas like the internet often involve demons,

Trithemius invented the internet in a flight of mystical fancy to cover up what he was really doing, which was inventing the internet. Demons disguise themselves as technology, technology disguises itself as demons; both end up being one and the same thing.

In the last section Kriss turns to Justin E.H. Smith’s book and reflects on how the book (unlike the preceding essay “It’s All Over”) is not what the internet expects. The internet, for Smith, likes critical essays that present the internet as a “rupture” – something like the industrial revolution, but for language – while in fact the internet in some form (like demons) has been with us all along. Kriss doesn’t agree. For him the idea of the internet might be old, but what we have now is still a transformation of an old nightmare.

If there are intimations of the internet running throughout history, it might be because it’s a nightmare that has haunted all societies. People have always been aware of the internet: once, it was the loneliness lurking around the edge of the camp, the terrible possibility of a system of signs that doesn’t link people together, but wrenches them apart instead. In the end, what I can’t get away from are the demons. Whenever people imagined the internet, demons were always there.

People Make Games

The CGSA/ACÉV Statement Against Exploitation and Oppression in Games Education and Industry links to a video report by People Make Games. The report documents emotional abuse in the education and indie game space. It deals with how leaders can create a toxic environment and how they can fail to take criticism seriously. A myth of the “auteur” in game design then protects the superstar leaders, which is why they called the video “people make games” (not single auteurs). Watch it.

Masayuki Uemura, Famicom creator, passes

I just got news that Masayuki Uemura has passed away. Professor Nakamura, Director of the Ritsumeikan Center for Game Studies, sent around this sad announcement.

As it has been announced in various media, we regretfully announce the passing of our beloved former Director and founder of the Ritsumeikan Center for Game Studies, and a father of the NES and SNES, Professor Masayuki Uemura. We were caught by surprise by the sudden and unfortunate news.

Even after he retired as the director of RCGS and became an advisor, he was always concerned about each researcher and the future of game research.

We would like to extend our deepest condolences to his family and relatives, and may God bless his soul.

As scholars in video game studies and history, we would like to follow his example and continue to excel in our endeavors.

(from Akinori Nakamura, Director, Ritsumeikan Center for Game Studies)

Donald Trump to launch social media platform called Truth Social

The former president, who remains banned from Facebook and Twitter, aims to rival those tech giants

The Guardian and other sources are covering the news that Donald Trump is to launch a social media platform called Truth Social. It is typical that he calls the platform the very thing he is accused of not providing … “truth”. Trump has no shame and routinely turns whatever is believed about him, from fake news to being a loser, into an accusation against others. The king of fake news called any story he didn’t like fake news, and when he lost the 2020 election he turned that upside down, making belief that the election was stolen (and that he therefore is not a loser) into a touchstone of Republican belief. How does this end? Do sane Republicans just stop mentioning him at some point? He can’t be disproved or disagreed with; all that can happen is that he gets cancelled. And that is why he wants us to Follow the Truth.

Facial Recognition: What Happens When We’re Tracked Everywhere We Go?

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.

The New York Times has an in-depth story about Clearview AI titled Facial Recognition: What Happens When We’re Tracked Everywhere We Go? The story tracks the various lawsuits attempting to stop Clearview and suggests that Clearview may well win. They are gambling that scraping the web’s faces for their application, even if it violated terms of service, may be protected as free speech.

The story talks about the dangers of face recognition and how many of the algorithms can’t recognize people of colour as accurately, which leads to more false positives where police end up arresting the wrong person. A broader worry is that this could unleash tracking at another scale.

There’s also a broader reason that critics fear a court decision favoring Clearview: It could let companies track us as pervasively in the real world as they already do online.
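
To make the false-positive worry concrete, here is a minimal sketch of how threshold-based face matching generally works: faces are reduced to embedding vectors, and a probe face is matched to the most similar stored identity only if the similarity clears a threshold. This is a generic illustration, not Clearview’s actual system; the match_face function, the identities, and the 0.6 threshold are made-up assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, gallery, threshold=0.6):
    """Return the gallery identity whose embedding is most similar to
    the probe, or None if no similarity clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Illustrative use with made-up embeddings:
gallery = {"person_a": np.array([0.9, 0.1, 0.2]),
           "person_b": np.array([0.1, 0.8, 0.5])}
probe = np.array([0.85, 0.15, 0.25])
print(match_face(probe, gallery))  # prints person_a
```

The threshold is where the false positives discussed above come from: set it too low and lookalikes get “identified” (the failure mode behind wrongful arrests); set it too high and real matches are missed. An embedding model that is less accurate for some groups makes those errors more likely for them.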

The arguments in favour of Clearview include the claim that they are essentially doing for images what Google does for text searches. Another argument is that stopping face recognition enterprises would stifle innovation.

The story then moves on to talk about the founding of Clearview and the political connections of the founders (Thiel invested in Clearview too). Finally it talks about how widely available face recognition could affect our lives. The story quotes Alvaro Bedoya who started a privacy centre,

“When we interact with people on the street, there’s a certain level of respect accorded to strangers,” Bedoya told me. “That’s partly because we don’t know if people are powerful or influential or we could get in trouble for treating them poorly. I don’t know what happens in a world where you see someone in the street and immediately know where they work, where they went to school, if they have a criminal record, what their credit score is. I don’t know how society changes, but I don’t think it changes for the better.”

It is interesting to think about how face recognition and other technologies may change how we deal with strangers. Too much knowledge could be alienating.

The story closes by describing how Clearview AI helped identify some of the Capitol rioters. Of course it wasn’t just Clearview, but also citizen investigators who named and shamed people based on released photos.

Replaying Japan 2020

Replaying Japan is an international conference dedicated to the study of Japanese video games. For the first time this year, the conference is being held online and will combine various types of research content (videos, texts, livestreams) on the theme of esports and competitive gaming in Japan.

This year the Replaying Japan conference was held online. The conference was originally going to be in Liège, Belgium at the Liège Game Lab. We were going to get to try Belgian fries and beer and learn more about the Game Lab. Alas, with the pandemic, the organizers had to pivot and organize an online conference. They did a great job using technologies like Twitch and Minecraft.

Keiji Amano, Tsugumi (Mimi) Okabe, and I had a paper on Ethics and Gaming: A Content Analysis of Annual Reports of the Japanese Game Industry, presented by Prof. Amano. (To read the longer conference paper you need access to the conference materials, but they will be opening that up.) We looked at how major Japanese game companies frame ethical or CSR (corporate social responsibility) issues, which is not how ethics is being discussed in the academy.
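
For those curious what this kind of content analysis can look like in practice, here is a minimal sketch of a keyword-counting pass over report text. The categories and keywords are hypothetical placeholders, not the coding scheme we actually used, and a real content analysis adds human coding on top of such counts.

```python
import re
from collections import Counter

# Hypothetical coding categories and keywords -- illustrative only.
CSR_TERMS = {
    "ethics": ["ethics", "ethical", "integrity", "compliance"],
    "labour": ["work-life balance", "overtime", "harassment"],
    "community": ["community", "diversity", "inclusion"],
}

def count_csr_terms(report_text: str) -> Counter:
    """Count whole-word keyword hits per CSR category in one report."""
    text = report_text.lower()
    counts = Counter()
    for category, terms in CSR_TERMS.items():
        for term in terms:
            pattern = r"\b" + re.escape(term) + r"\b"
            counts[category] += len(re.findall(pattern, text))
    return counts

# Example with a made-up snippet of an annual report:
snippet = "Our ethics training and work-life balance programs expanded."
print(count_csr_terms(snippet))  # ethics: 1, labour: 1, community: 0
```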

The two keynotes were both excellent in different ways. Florent Georges talked about First Steps of Japanese ESports. His talk introduced a number of important early video game competitions. 

Susana Tosca gave the closing keynote. She presented a nuanced and fascinating talk on Mediating the Promised Gameland (see video). She looked at how game tourists visit Japan and interviewed people about this phenomenon of content tourism. This was wrapped in reflections on methodology and tourism. Very interesting, though it raised some ethical issues about how we watch tourists. She was sensitive to the way that ethnographers are tourists of a sort and we need to be careful not to mock our subjects as we watch them. As someone who loves to travel and is therefore often a tourist, I’m probably sensitive on this issue.

Sean Gouglas Remembers Stéfan Sinclair

Sean Gouglas shared these memories of Stéfan Sinclair with me and asked me to post them. They are from when they started the Humanities Computing programme at the University of Alberta where I am lucky to now teach.

In the summer of 2001, two newly-minted PhDs started planning how they were going to build and then teach a new graduate program in Humanities Computing at the University of Alberta. This was the first such program in North America. To be absolutely honest, Stéfan Sinclair and I really had no idea what we were doing. The next few months were both exhausting and exhilarating. Working with Stéfan was a professional and personal treat, especially considering that he had an almost infinite capacity for hard work. I remember him coding up the first Humanities Computing website in about seven minutes — the first HuCo logo appearing like a rising sun on a dark blue background. It also had an unfortunate typo that neither of us noticed for years. 

It was an inspiration to work with Stéfan. He was kind and patient with students, demanding a lot from them but giving even more back. He promoted the program passionately at every conference, workshop, and seminar. Over the next three years, there was a lot of coffee, a lot of spicy food, a beer or two, some volleyball, some squash, and then he and Stephanie were off to McMaster for their next adventure. 

Our Digital Humanities program has changed a lot since then — new courses, new programs, new faculty, and even a new name. Through that change, the soul of the program remained the same and it was shaped and molded by the vision and hard work of Stéfan Sinclair. 

On the 6th of August, Stéfan died of cancer. The Canadian Society for Digital Humanities has a lovely tribute, which can be found here: https://csdh-schn.org/stefan-sinclair-in-memoriam/. It was written in part by Geoffrey Rockwell, who worked closely with Stéfan for more than two decades.