Doki Doki Literature Club!

The Literature Club is full of cute girls! Will you write the way into their heart?

Dr. Ensslin gave a great short survey of digital fiction, including Doki Doki Literature Club! (DDLC), at the Dyscorpia symposium. DDLC is a visual novel created in Ren’Py by Team Salvato that plays with the genre. The game starts as a fairly typical dating game, but it first turns into a horror game and then begins to get hacked by one of the characters, who wants your attention. That character, it turns out, has not only encouraged some of the other girls in the Literature Club to commit suicide, but has also edited them out of the game itself. At the end of the game she has a lengthy face-to-face conversation with you, breaking the fourth wall of the screen.

Like most visual novels, it can be excruciating to advance through lots of text to get to the point where things change, but eventually you will notice glitches that make things more interesting. I found myself paying more attention to the text as the glitches drew attention to the script. (The script itself is even mentioned in the game.)

DDLC initially mimics the Japanese visual novel genre, right down to the graphics, but eventually the script veers off. It was well received in game circles, winning a number of prizes.

The Body in Question(s)

Isabelle Van Grimde gave the opening talk at Dyscorpia on her work, including projects like The Body in Question(s). In another project, Les Gestes, she collaborated with the McGill IDMIL lab, which developed digital musical instruments for the dancers to wear and dance/play.

Van Grimde’s company Corps Secrets faces the challenge of creating dances that can travel, which means that the technologies/instruments have to travel too. They use intergenerational casts (the elderly or children). They are now working with sensors more than instruments so that the dancers are free of equipment.

We Built a (Legal) Facial Recognition Machine for $60

The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.

The New York Times has an opinion piece by Sahil Chinoy on how (they) We Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment in which they took footage of people walking past cameras installed in Bryant Park and compared it to photos of people known to work in the area (scraped from the web sites of organizations that have offices in the neighborhood). Everything they did used public resources that others could use. The cameras stream their footage here. Anyone can scrape the images. The image database they gathered came from public web sites. The software is available as a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
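
To get a sense of how cheap and simple such a pipeline is, here is a minimal sketch of the kind of face comparison the Times describes, using Amazon’s Rekognition service through the boto3 Python library. The file names and similarity threshold are hypothetical placeholders, not details from the article.

    # A sketch of a single comparison, assuming the boto3 library and AWS
    # credentials are already set up. The image files are hypothetical
    # stand-ins: one photo scraped from a public staff page, one frame
    # grabbed from the public camera stream.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    with open("known_worker.jpg", "rb") as f:
        source_bytes = f.read()
    with open("park_frame.jpg", "rb") as f:
        target_bytes = f.read()

    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=90,  # only report strong matches
    )

    for match in response["FaceMatches"]:
        print(f"Possible match, similarity {match['Similarity']:.1f}%")

Looping such a call over scraped staff photos and streamed frames is essentially the whole experiment; the marginal cost per comparison is fractions of a cent.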

I’m intrigued by this experiment by the New York Times. It is a form of design thinking where they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could say it is a form of journalistic experimentation.

Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for being recognized in all sorts of situations? Do we need to start guarding our faces?

Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.

This is one of a number of excellent articles by the New York Times that are part of their Privacy Project.

Your brain probably is a computer, whatever that means

We’re certainly on to something when we say the brain is a computer – even if we don’t yet know what exactly we’re on to

Kevin Lande has written a fine essay for Aeon titled, Your brain probably is a computer, whatever that means. The essay starts with the apparent contradiction that “We have clear reasons to think that it’s literally true that the brain is a computer, yet we don’t have any clear understanding of what this means.”

We know of many cases where the brain clearly computes things, like where a sound is coming from, but we don’t know what sort of computer the brain is, or whether it is only a computer. For that matter, we don’t know a lot about what computation is either.
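
To make the sound-localization example concrete: one signal the auditory system demonstrably exploits is the interaural time difference, the tiny delay between a sound reaching one ear and then the other. A toy sketch of that computation, using illustrative numbers for head width and the speed of sound:

    # A toy model of sound localization from the interaural time
    # difference (ITD): theta = arcsin(c * dt / d), the standard
    # far-field approximation. The constants are illustrative.
    import math

    SPEED_OF_SOUND = 343.0  # metres per second, in air
    EAR_DISTANCE = 0.21     # metres, roughly the width of a human head

    def direction_from_itd(itd_seconds: float) -> float:
        """Return the sound's azimuth in degrees (0 = straight ahead)."""
        ratio = SPEED_OF_SOUND * itd_seconds / EAR_DISTANCE
        ratio = max(-1.0, min(1.0, ratio))  # clamp against noisy input
        return math.degrees(math.asin(ratio))

    # A sound arriving 0.3 ms earlier at one ear is roughly 29 degrees
    # off centre toward that side.
    print(direction_from_itd(0.0003))

Whatever the brain’s neural circuitry is doing, it recovers something like this arcsine relationship from microsecond-scale delays, which is why it seems literally true that it computes.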

Food for thought.

Are Robots Competing for Your Job?

Are robots competing for your job?
Probably, but don’t count yourself out.

The New Yorker magazine has a great essay by Jill Lepore asking Are Robots Competing for Your Job? (Feb. 25, 2019). The essay discusses various predictions, including the prediction that R.I. (Remote Intelligence, or global workers) will take your job too. The fear of robots is the other side of the coin of the fear of immigrants, which raises questions about why we are panicking over jobs when unemployment is so low.

Misery likes a scapegoat: heads, blame machines; tails, foreigners. But is the present alarm warranted? Panic is not evidence of danger; it’s evidence of panic. Stoking fear of invading robots and of invading immigrants has been going on for a long time, and the predictions of disaster have, generally, been bananas. Oh, but this time it’s different, the robotomizers insist.

Lepore points out how many job categories have been lost only to be replaced by others, which is why economists are apparently dismissive of the anxiety.

Some questions we should be asking include:

  • Who benefits from all these warnings about job loss?
  • How do these warnings function rhetorically? What else might they be saying? How are they interpretations of the past by futurists?
  • How is the panic about job losses tied to worries about immigration?

Artificial intelligence: Commission takes forward its work on ethics guidelines

The European Commission has announced the next step in its Artificial Intelligence strategy. See Artificial intelligence: Commission takes forward its work on ethics guidelines. The Commission appointed a High-Level Expert Group in June of 2018. This group has now developed Seven essentials for achieving trustworthy AI:

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The next step, now announced, is a pilot phase that tests these essentials with stakeholders. The Commission also wants to cooperate with “like-minded partners” like Canada.

What would it mean to participate in the pilot?

The Unpredictability of Gameplay: Mark R. Johnson: Bloomsbury Academic

The Unpredictability of Gameplay explores the many forms of unpredictability in games and proposes a comprehensive theoretical framework for understanding and categorizing non-deterministic game mechanics.

Today we celebrated the publication by Bloomsbury of Mark R. Johnson’s The Unpredictability of Gameplay. Johnson proposes a typology of unpredictability:

  • Randomness (initial conditions of the game)
  • Chance (during play)
  • Luck (in/ability for player to affect final outcome)
  • Instability (outside the game)

Mark’s book nicely connects gaming and gambling. It helps us understand the way microgames within larger videogames can become a form of gambling. He looks at loot boxes, where players are encouraged to spend small amounts of money over and over to get virtual goods inside a larger game. In Japan, kompu gacha (or “complete gacha”) was eventually discouraged by the government because too many people were effectively gambling in their attempts to collect complete sets of virtual goods by paying over and over to open loot boxes. Kompu gacha and other forms of loot boxes in videogames are a way for developers to monetize a game, but they also create metagame contexts that are, in effect, gambling. What is interesting is that people don’t think of videogames as sites for gambling. One wants to say that this is another (micro) form of casino capitalism.
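
The economics of complete gacha are worth spelling out: completing a set of equally likely items is the classic coupon-collector problem, and the expected number of paid pulls grows much faster than players intuit. A toy simulation, with a made-up set size and the assumption of uniform pull probabilities:

    # A toy simulation of completing a gacha set: with N equally likely
    # items, this is the coupon-collector problem, and the expected
    # number of paid pulls is N * H(N), not N. The set size is made up.
    import random

    def pulls_to_complete(set_size: int) -> int:
        """Count random pulls until every item in the set is owned."""
        owned = set()
        pulls = 0
        while len(owned) < set_size:
            owned.add(random.randrange(set_size))
            pulls += 1
        return pulls

    trials = 10_000
    set_size = 20
    average = sum(pulls_to_complete(set_size) for _ in range(trials)) / trials
    print(f"Average pulls to complete a {set_size}-item set: {average:.1f}")

For a 20-item set the simulation converges on roughly 72 pulls (20 × H(20) ≈ 72), more than triple what a player might naively budget, and real systems often make the last items rarer still.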

Ethicists are no more ethical than the rest of us, study finds

When it comes to the crucial ethical question of calling one’s mother, most people agreed that not doing so was a moral failing.

Quartz reports on a study in Philosophical Psychology: Ethicists are no more ethical than the rest of us, study finds. While one wonders how one can survey how ethical someone is, this is nonetheless a believable result. The contemporary university is deliberately structured not to be a place for changing people’s morals, but for educating them. When we teach ethics we don’t assess or grade the morality of the student. Likewise, when we hire, promote, and assess the ethics of a philosophy professor, we don’t assess their personal morality. We assess their research, teaching, and service record, all of which can be burnished without actually being ethical. There is, if you will, a professional ethic that research and teaching should not be personal, but detached.

A focus on the teaching and learning of ethics over personal morality is, despite the appearance of hypocrisy, a good thing. We try to create in the university, in the class, and in publications an openness to ideas, whomever they come from. By avoiding discussion of personal morality we try to create a space where people of different views can enter into dialogue about ethics. Imagine what it would be like if it were otherwise. Imagine if my ethics class were about converting students to some standard of behaviour. Who would decide what that standard was? The ethos of professional ethics is one that emphasizes dialogue over action, history over behaviour, and ethical argumentation over disposition. Would it be ethical any other way?

Modelling Cultural Processes

Mt Fuji with the sun setting behind

Sitting on a hill with a view of Mt. Fuji across the water is the Shonan Village Center, where I just finished a research retreat on Modelling Cultural Processes. This was organized by Mits Inaba, Martin Roth, and Gerhard Heyer of Ritsumeikan University and the University of Leipzig. It brought together people in computing, linguistics, game studies, political science, literary studies, and the digital humanities. My conference notes are here.

Unlike a conference, much of the time was spent in working groups discussing issues like identity, shifting content, and constructions of culture. As part of our working groups we developed a useful model of the research process across the humanities and social sciences, one that helps us understand where shifts in content occur.

Mt Fuji in the distance across the water

Pius Adesanmi on Africa is the Forward

Today I learned about Pius Adesanmi, who died in the recent Ethiopian Airlines crash. From all accounts he was an inspiring professor of English and African Studies at Carleton. You can hear him in the TEDxEuston talk embedded above, or you can read from his collection of satirical essays titled Naija No Dey Carry Last: Thoughts on a Nation in Progress.

In the TEDx talk he makes a prescient point about new technologies,

We are undertakers. Man will always preside over the funeral of any piece of technology that pretends to replace him.

He connects this prediction, that all new technologies, including AI, will also pass on, with a reflection on Africa as a place from which to understand technology.

And that is what Africa understands so well. Should Africa face forward? No. She understands that there will be man to preside over the funeral of these new innovations. She doesn’t need to face forward if she understands human agency. Africa is the forward that the rest of humanity must face.

We need this vision of/from Africa. It gets ahead of the ever-returning hype cycle of new technologies. It imagines a position from which we can escape the never-ending discourse of disruptive innovation that limits our options before AI.

May Pius Adesanmi rest in peace.