Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article of ours, "Ottawa's use of our location data raises big surveillance and privacy concerns." This was written with a number of colleagues who were part of a Dagstuhl research retreat on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles alone are not enough; we need to start talking about best practices in order to develop a culture of ethical data use.

Value Sensitive Design and Dark Patterns

Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to. The purpose of this site is to spread awareness and to shame companies that use them.

Reading about Value Sensitive Design, I came across a link to Harry Brignull's Dark Patterns. The site is about the ways that web designers try to manipulate users. It has a Hall of Shame that is instructive and a Reading List if you want to follow up. It is also interesting to see attempts to regulate certain patterns of deception.

Values are expressed and embedded in technology; they have real and often non-obvious impacts on users and society.

The alternative is to introduce values and ethics into the design process. This is where Value Sensitive Design comes in. As developed by Batya Friedman and colleagues, it is an approach that includes methods for thinking through the ethics of a project from the beginning. Some of the approaches mentioned in the article include:

  • Mapping out what a design will support, hinder or prevent.
  • Considering the stakeholders, especially those who may not have any say in the deployment or use of a technology.
  • Trying to understand the underlying assumptions of technologies.
  • Broadening our gaze as to the effects of a technology on human experience.

They have even produced a set of Envisioning Cards for sale.

In Isolating Times, Can Robo-Pets Provide Comfort? – The New York Times

As seniors find themselves cut off from loved ones during the pandemic, some are turning to automated animals for company.

I’m reading about Virtual Assistants and thinking that in some ways the simplest VAs are the robo-pets being given to lonely elderly people who are isolated. See In Isolating Times, Can Robo-Pets Provide Comfort? Robo-cats and dogs (and even seals) seem to provide comfort the way a stuffed animal might. They aren’t even that smart, but they can still console an older person suffering from isolation.

These pets, like PARO (an expensive Japanese robotic seal seen above) or the much cheaper Joy for All pets, can possibly fool people with dementia. What are the ethics of this? Are we comfortable fooling people for their own good?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It also raises the ethical issue of how voice conveys information, such as whether the VA is a bot or not.

The Proliferation of AI Ethics Principles: What’s Next?

The Montreal AI Ethics Institute has republished a nice article by Ravit Dotan, The Proliferation of AI Ethics Principles: What’s Next? Dotan starts by looking at some of the meta studies and then goes on to argue that we are unlikely to ever come up with a “unique set of core AI principles”, nor should we want to. She points out the lack of diversity in the sets we have. Different types of institutions will need different types of principles. She ends with these questions:

How do we navigate the proliferation of AI ethics principles? What should we use for regulation, for example? Should we seek to create new AI ethics principles which incorporate more perspectives? What if it doesn’t result in a unique set of principles, only increasing the multiplicity of principles? Is it possible to develop approaches for AI ethics governance that don’t rely on general AI ethics principles?

I am personally convinced that a more fruitful way forward is to start trading stories. These stories could take the form of incidents or cases or news or science fiction or even AI generated stories. We need to develop our ethical imagination. Hero Laird made this point in a talk on AI, Ethics and Law that was part of a salon we organize at AI4Society. They quoted from Thomas King’s The Truth About Stories to the effect that,

The truth about stories is that that’s all we are.

What stories do artificial intelligences tell themselves?

Artificial Intelligence Incident Database

I discovered the Artificial Intelligence Incident Database developed by the Partnership on AI. The Database contains reports on things that have gone wrong with AIs, like the Australian Centrelink robodebt debacle.

The Incident Database was developed to help educate developers and encourage learning from mistakes. They have posted a paper to arXiv on Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.

The Emissary and Harrow

Yoko Tawada’s new novel imagines a time in which language starts to vanish and the elderly care for weakened children.

I’ve just finished two brilliant and surreal works of post-climate fiction. One was Yoko Tawada’s The Emissary, also published as “The Last Children of Tokyo”. This novel follows a great-grandfather, healthy and active at over 100 years old, as he raises his great-grandson Mumei (“no name”), who is disabled by whatever disasters have washed over Japan. The country is also shutting down – entering another Edo period of isolation – making even language an issue. Unlike most post-apocalyptic fiction, this isn’t about what actually happened or about how people fight off the zombies; it is about imagining a strange, isolated life where Japan tries for some sort of purity again. As such, the novel comments on present-day, aging Japan – a Japan that has forgotten the Fukushima disaster and is firing up its nuclear reactors again. At the end we find that Mumei might be chosen as an Emissary, to be smuggled out of Japan to the outside world where the strange syndrome affecting youth can be studied.

For more, see reviews like After Disaster, Japan Seals Itself Off From the World in ‘The Emissary’ in the New York Times or Japan’s Isolation 2.0.

The second book is Harrow by Joy Williams. The novel takes place at a time when we deny there is anything wrong, depicting an America determined to keep on pretending nothing is happening – an America extended, in harrowing fashion, from our strange ignorance. The novel is in three parts and has religious undertones, with the main character first called the lamb and then “Khristen.” The last part continually references Kafka’s The Hunter Gracchus, an obscure story about a boat carrying Gracchus that wanders, unable to make it across to the underworld. Likewise, America in this novel seems to wander, unable to make it across to some reality. The third part might be set in the time of judgement, but a Sartrean judgement with no exit, where a child is judge and all that happens is more of the surreal same. As a reviewer points out, the “harrow” may be the torture instrument Kafka describes in “In the Penal Colony” that writes your punishment on your back where you can’t quite see it. Likewise, we are writing our punishment on our earth, where we choose not to see it.

See reviews like this one in the Harvard Review Online.

Emojify: Scientists create online games to show risks of AI emotion recognition

Public can try pulling faces to trick the technology, while critics highlight human rights concerns

From the Guardian story, Scientists create online games to show risks of AI emotion recognition, I discovered Emojify, a website with games that show how problematic emotion detection is. Researchers are worried by the booming business of emotion detection with artificial intelligence. For example, it is being used in education in China. See the CNN story about how In Hong Kong, this AI reads children’s emotions as they learn.

A Hong Kong company has developed facial expression-reading AI that monitors students’ emotions as they study. With many children currently learning from home, they say the technology could make the virtual classroom even better than the real thing.

With cameras all over, this should worry us. We are not only being identified by face recognition; now they want to know our inner emotions too. What sort of theory of emotions licenses these systems?

Dead By Daylight fans unhappy Hellraiser model is an NFT

Apparently Non-Fungible Tokens (NFTs) of game models are not going down well with fans, according to the story Dead By Daylight fans unhappy Hellraiser model is an NFT.

Even though Behaviour isn’t selling the NFTs themselves, it is facilitating their sale by providing the models from the game. Gaming fans seem to view blockchain and NFTs as dubious and environmentally unsound technology. Behaviour’s response was,

We hear and understand the concerns you raised over NFTs. Absolutely zero blockchain tech exists in Dead by Daylight. Nor will it ever. Behaviour Interactive does not sell NFTs.

On a related note, Valve is banning blockchain and NFT games.

Inaugural Lord Renwick Memorial Lecture w/ Vint Cerf

From Humanist I learned about the Inaugural Lord Renwick Memorial Lecture w/ Vint Cerf, available at the Internet Archive. The lecture is also available as a text transcript here (PDF). Vint Cerf is one of the pioneers of the Internet, and in this lecture he talks about the “five alligators” of the Internet: 1) Technology, 2) Regulation, 3) Institutions, 4) the Digital Divide, and 5) Digital Preservation.

Under Technology, he traces a succinct history of the Internet, pointing out how important the ALOHAnet project was to its eventual design. Under Regulation, he talks about different levels of regulation and the pros and cons of each. Later there are some questions about the issue of anonymity and civil discourse. All told, the talk does a great job of covering the issues facing the Internet today.

Here is his answer to a question about how to put more humanity into the Internet.

The first observation I would make is that civility is a social decision that we either choose or don’t. Creating norms is very important. I think norms are not necessarily backed up by, you know, law enforcement for example, they’re considered societal values, and I fear that openness in the Internet has led to a, let’s say, a diminution, erosion, of civil discourse. I would suggest to you, however, that it’s possibly understandable in the following analog. Those of you who drive cars may, like I do, say things to the other drivers, or about the other drivers, that I would never say face to face, but there’s this windshield separating me from the other drivers, and I feel free to express myself, in ways that I would not if I were face to face. Sometimes I think the computer screen acts a little bit like the windshield of the car and allows us to behave in ways that we wouldn’t otherwise if we were right there with the target of our comments. Somehow we have to infuse back into society the value of civil discourse, and the only way to do that I think is to start very early on in school to introduce children, and their parents, and adults, to the value of civility in terms of making progress in coming together, finding common ground, finding solutions to things, as opposed to simply firing our 45 caliber Internet packets at each other. I really hope that the person asking the question has some ideas for introducing incentives for exactly that behavioral change. I will point out that seatbelts and smoking has possibly some lessons to teach, where we incorporated not only advice but we also said, by the way, if we catch you smoking in this building, there will be consequences, because we said you shouldn’t do it. So, maybe we have to have some kind of social consequence for bad behavior. (p. 13-4)

Later on he talks about license plates, following the same analogy of how we behave when driving. Your car gives you some anonymity, but your license plate can be used to identify you if you go too far.