Wordle – A daily word game

Wordle Logo

Guess the hidden word in 6 tries. A new puzzle is available each day.

Well … I finally played Wordle – A daily word game after reading about it. It was a nice clean puzzle that got me thinking about vowels. I like that there is only one puzzle a day, as I was immediately tempted to try another and another … Instead, the one-a-day limit gives the game a detachment. I can see why the New York Times would buy it; it is the sort of game that would bring in potential subscribers.
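For the curious, the core mechanic is easy to sketch. Here is a minimal, hypothetical scoring function in Python (not Wordle’s actual code), assuming ‘G’ marks a letter in the right spot, ‘Y’ a letter that is in the word but elsewhere, and ‘.’ a miss:

```python
from collections import Counter

def score_guess(guess: str, answer: str) -> str:
    """Wordle-style feedback: G = right letter, right spot;
    Y = in the word, wrong spot; . = not in the word."""
    feedback = ["."] * len(guess)
    unmatched = Counter()
    # First pass: exact matches; tally the answer's leftover letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "G"
        else:
            unmatched[a] += 1
    # Second pass: misplaced letters, consuming leftovers so a
    # duplicate in the guess can't earn more Ys than the answer has.
    for i, g in enumerate(guess):
        if feedback[i] == "." and unmatched[g] > 0:
            feedback[i] = "Y"
            unmatched[g] -= 1
    return "".join(feedback)

print(score_guess("adieu", "audio"))  # GYY.Y
```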

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article, Ottawa’s use of our location data raises big surveillance and privacy concerns. It was written with a number of colleagues who were part of a Dagstuhl research retreat on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

We Might Be in a Simulation. How Much Should That Worry Us?

We may not be able to prove that we are in a simulation, but at the very least, it will be a possibility that we can’t rule out. But it could be more than that. Chalmers argues that if we’re in a simulation, there’d be no reason to think it’s the only simulation; in the same way that lots of different computers today are running Microsoft Excel, lots of different machines might be running an instance of the simulation. If that was the case, simulated worlds would vastly outnumber non-sim worlds — meaning that, just as a matter of statistics, it would be not just possible that our world is one of the many simulations but likely.

The New York Times has a fun opinion piece to the effect that We Might Be in a Simulation. How Much Should That Worry Us? This follows on Nick Bostrom’s essay Are you living in a computer simulation?, which argues that at least one of three things must be true: civilizations like ours tend to go extinct before becoming posthuman, advanced posthuman civilizations don’t run lots of simulations of their past, or we are almost certainly living in one.
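The statistical step in the argument can be made explicit. Here is a back-of-the-envelope version (my gloss, not Chalmers’ formulation), assuming one unsimulated base world, N simulated worlds, and equal credence in being an observer in any of them:

```latex
% One base world plus N simulated worlds, with indifference over all N+1:
P(\text{we are simulated}) = \frac{N}{N+1} \longrightarrow 1
\quad \text{as } N \to \infty
```

Everything rides on the assumptions, of course: that such simulations get run at all, and that counting worlds this way is the right way to assign credence.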

The opinion piece is partly a review of a recent book by David Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (which I haven’t read). Chalmers thinks there is a good chance we are in a simulation, and if so, there are probably others.

I am also reminded of Hervé Le Tellier’s novel The Anomaly, in which a plane full of people pops out of the clouds for the second time, creating an anomaly: there are now two instances of each person on the plane. This is taken as a glitch that may indicate we are in a simulation, raising all sorts of questions about whether there actually are anomalies that might show this really is a simulation or a complicated idea in God’s mind (think of Bishop Berkeley’s idealism).

For me the challenge is the complexity of the world I experience. I can’t help thinking that a posthuman society modelling things really wouldn’t need so rich a world. For that matter, would there really be enough computing power to do it? Is this simulation fantasy just a virtual reality version of the singularity hypothesis, prompted by the new VR technologies coming on stream?

Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored (by AI)

Black-and-white and AI-coloured versions of Philosophy by Klimt

Google Arts & Culture launched a hub for all things Gustav Klimt today, which includes digital restorations of three lost paintings.

ARTnews, among other places, reports that Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored. The three faculty paintings (Medicine, Philosophy, and Jurisprudence), painted for the University of Vienna, were destroyed in a fire, leaving only black-and-white photographs. Now Google has helped recreate what the three paintings might have looked like using AI, as part of a Google Arts & Culture site on Klimt. You can read about the history of the three faculties here.

Whether in black and white or in colour, the painting of Philosophy (above) is stunning. The colour original would have been even more striking, especially as it was 170 by 118 inches. Philosophy is represented by the Sphinx-like figure merging with the universe. To one side is a stream of people, from the young to the old, who hold their heads in confusion. At the bottom is a woman, comparable to the woman in the painting of Medicine, who might be an inspired philosopher looking through us.

Value Sensitive Design and Dark Patterns

Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to. The purpose of this site is to spread awareness and to shame companies that use them.

Reading about Value Sensitive Design I came across a link to Harry Brignull’s Dark Patterns. The site is about the ways that web designers try to manipulate users. It has a Hall of Shame that is instructive and a Reading List if you want to follow up. It is interesting to see attempts to regulate certain patterns of deception.
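To make the idea concrete, here is a minimal sketch in Python of one of the most common patterns, the pre-checked opt-in, next to its honest counterpart. The function names and fields are hypothetical, not drawn from any real site:

```python
def signup(email: str, newsletter: bool = True) -> dict:
    """Dark pattern: consent defaults to True, so anyone who
    overlooks the checkbox is subscribed without meaning to be."""
    return {"email": email, "newsletter": newsletter}

def signup_honest(email: str, newsletter: bool = False) -> dict:
    """Honest default: nothing is assumed; the user must opt in."""
    return {"email": email, "newsletter": newsletter}

print(signup("user@example.com"))         # newsletter: True
print(signup_honest("user@example.com"))  # newsletter: False
```

The deception lives entirely in the default: the interface looks like a choice, but inaction is silently converted into consent.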

Values are expressed and embedded in technology; they have real and often non-obvious impacts on users and society.

The alternative is to introduce values and ethics into the design process. This is where Value Sensitive Design comes in. As developed by Batya Friedman and colleagues, it is an approach that includes methods for thinking through the ethics of a project from the beginning. Some of the approaches mentioned in the article include:

  • Mapping out what a design will support, hinder, or prevent.
  • Considering the stakeholders, especially those that may not have any say in the deployment or use of a technology.
  • Trying to understand the underlying assumptions of technologies.
  • Broadening our gaze as to the effects of a technology on human experience.

They have even produced a set of Envisioning Cards for sale.

In Isolating Times, Can Robo-Pets Provide Comfort? – The New York Times

As seniors find themselves cut off from loved ones during the pandemic, some are turning to automated animals for company.

I’m reading about Virtual Assistants and thinking that, in some ways, the simplest VAs are the robo-pets being given to isolated elderly people. See In Isolating Times, Can Robo-Pets Provide Comfort? Robo-cats and dogs (and even seals) seem to provide comfort the way a stuffed animal might. They aren’t even that smart, but they can still comfort an older person suffering from isolation.

These pets, like PARO (an expensive Japanese robotic seal seen above) or the much cheaper Joy for All pets, can possibly fool people with dementia. What are the ethics of this? Are we comfortable fooling people for their own good?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It also mentions the ethical issue of how voice conveys information, like whether the VA is a bot or not.

Masayuki Uemura, Famicom creator, passes

I just got news that Masayuki Uemura has passed away. Professor Nakamura, Director of the Ritsumeikan Center for Game Studies, sent around this sad announcement.

As it has been announced in various media, we regretfully announce the passing of our beloved former Director and founder of Ritsumeikan Center for Game Studies, and a father of NES and SNES – Professor Masayuki Uemura. We were caught by surprise at the sudden and unfortunate news.

Even after he retired as the director of RCGS and became an advisor, he was always concerned about each researcher and the future of game research.

We would like to extend the deepest condolences to his family and relatives, and may God bless his soul.

As scholars in video game studies and history, we would like to follow his example and continue to excel in our endeavors.

(from Akinori Nakamura, Director, Ritsumeikan Center for Game Studies)

The Proliferation of AI Ethics Principles: What’s Next?

The Montreal AI Ethics Institute has republished a nice article by Ravit Dotan, The Proliferation of AI Ethics Principles: What’s Next? Dotan starts by looking at some of the meta-studies and then goes on to argue that we are unlikely ever to come up with a “unique set of core AI principles”, nor should we want one. She points out the lack of diversity in the sets we have. Different types of institutions will need different types of principles. She ends with these questions:

How do we navigate the proliferation of AI ethics principles? What should we use for regulation, for example? Should we seek to create new AI ethics principles which incorporate more perspectives? What if it doesn’t result in a unique set of principles, only increasing the multiplicity of principles? Is it possible to develop approaches for AI ethics governance that don’t rely on general AI ethics principles?

I am personally convinced that a more fruitful way forward is to start trading stories. These stories could take the form of incidents or cases or news or science fiction or even AI-generated stories. We need to develop our ethical imagination. Hero Laird made this point in a talk on AI, Ethics and Law that was part of a salon we organize at AI4Society. They quoted from Thomas King’s The Truth About Stories to the effect that,

The truth about stories is that that’s all we are.

What stories do artificial intelligences tell themselves?

Artificial Intelligence Incident Database

I discovered the Artificial Intelligence Incident Database developed by the Partnership on AI. The Database contains reports on things that have gone wrong with AIs, like the Australian Centrelink robodebt debacle.
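To give a sense of what cataloguing an incident involves, here is a hypothetical sketch in Python of the kind of record such a database might keep. The field names are illustrative, not the actual AIID schema:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """Illustrative fields only -- not the real AIID schema."""
    incident_id: int
    title: str
    deployer: str                 # who fielded the system
    harmed_parties: list[str] = field(default_factory=list)
    source_urls: list[str] = field(default_factory=list)  # press reports, etc.

robodebt = IncidentReport(
    incident_id=1,
    title="Centrelink robodebt scheme issues incorrect debt notices",
    deployer="Australian government (Centrelink)",
    harmed_parties=["welfare recipients"],
)
print(robodebt.title)
```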

The Incident Database was developed to help educate developers and encourage learning from mistakes. They have posted a paper to arXiv on Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.