Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article, Ottawa’s use of our location data raises big surveillance and privacy concerns. It was written with a number of colleagues who were part of a Dagstuhl research retreat on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough; we need to start talking about best practices in order to develop a culture of ethical data use.

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It also raises the ethical issue of how voice conveys information, like whether the assistant is a bot or not.

The Proliferation of AI Ethics Principles: What’s Next?

The Montreal AI Ethics Institute has republished a nice article by Ravit Dotan, The Proliferation of AI Ethics Principles: What’s Next? Dotan starts by looking at some of the meta studies and then goes on to argue that we are unlikely to ever come up with a “unique set of core AI principles”, nor should we want to. She points out the lack of diversity in the sets we have. Different types of institutions will need different types of principles. She ends with these questions:

How do we navigate the proliferation of AI ethics principles? What should we use for regulation, for example? Should we seek to create new AI ethics principles which incorporate more perspectives? What if it doesn’t result in a unique set of principles, only increasing the multiplicity of principles? Is it possible to develop approaches for AI ethics governance that don’t rely on general AI ethics principles?

I am personally convinced that a more fruitful way forward is to start trading stories. These stories could take the form of incidents, cases, news, science fiction, or even AI-generated stories. We need to develop our ethical imagination. Hero Laird made this point in a talk on AI, Ethics and Law that was part of a salon we organize at AI4Society. They quoted from Thomas King’s The Truth About Stories to the effect that,

The truth about stories is that that’s all we are.

What stories do artificial intelligences tell themselves?

Artificial Intelligence Incident Database

I discovered the Artificial Intelligence Incident Database developed by the Partnership on AI. The database contains reports on things that have gone wrong with AIs, like the Australian Centrelink robodebt debacle.

The Incident Database was developed to help educate developers and encourage learning from mistakes. They have posted a paper to arXiv on Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.

Ask Delphi

Delphi Screen Shot

Ask Delphi is an intriguing AI that you can use to ponder ethical questions. You type in a situation and it will tell you if it is morally acceptable or not. It is apparently built not on Reddit data, but on crowdsourced data, so it shouldn’t be as easy to provoke into giving toxic answers.

In their paper, Delphi: Towards Machine Ethics and Norms, they say that they have created a Commonsense Norm Bank, “a collection of 1.7M ethical judgments on diverse real-life situations.” This contributes to Delphi’s sound pronouncements, but the bank doesn’t seem to be available to others yet.
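
To give a sense of the interaction, here is a minimal sketch of how one might script queries to a Delphi-style service from Python. The endpoint URL, parameter name, and response format are placeholders I am assuming for illustration; they are not Delphi’s documented API.

```python
import requests

# Hypothetical endpoint and parameters: placeholders, not Delphi's documented API.
DELPHI_URL = "https://example.org/delphi/api"  # placeholder URL

def ask_delphi(situation: str) -> str:
    """Send a free-text situation and return the service's moral judgment."""
    response = requests.get(DELPHI_URL, params={"action": situation}, timeout=10)
    response.raise_for_status()
    # Assume the service returns JSON like {"judgment": "It's wrong"}.
    return response.json().get("judgment", "")

if __name__ == "__main__":
    print(ask_delphi("ignoring a phone call from a friend during work hours"))
```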

AI Weirdness (Janelle Shane) has a nice story on how she fooled Delphi.

Emojify: Scientists create online games to show risks of AI emotion recognition

Public can try pulling faces to trick the technology, while critics highlight human rights concerns

From the Guardian story, Scientists create online games to show risks of AI emotion recognition, I discovered Emojify, a website with games that show how problematic emotion detection is. Researchers are worried by the booming business of emotion detection with artificial intelligence. For example, it is being used in education in China. See the CNN story, In Hong Kong, this AI reads children’s emotions as they learn.

A Hong Kong company has developed facial expression-reading AI that monitors students’ emotions as they study. With many children currently learning from home, they say the technology could make the virtual classroom even better than the real thing.

With cameras all over, this should worry us. We are not only being identified by face recognition; now they want to know our inner emotions too. What sort of theory of emotions licenses these systems?

Why people believe Covid conspiracy theories: could folklore hold the answer?

Using Danish witchcraft folklore as a model, the researchers from UCLA and Berkeley analysed thousands of social media posts with an artificial intelligence tool and extracted the key people, things and relationships.

The Guardian has a nice story on Why people believe Covid conspiracy theories: could folklore hold the answer? This reports on research using folklore theory and artificial intelligence to understand conspiracies.

The story maps how, for conspiracy fans, Bill Gates connects the coronavirus with 5G. The researchers use folklore theory to understand the way conspiracy theories work.

Folklore isn’t just a model for the AI. Tangherlini, whose specialism is Danish folklore, is interested in how conspiratorial witchcraft folklore took hold in the 16th and 17th centuries and what lessons it has for today.

Whereas in the past, witches were accused of using herbs to create potions that caused miscarriages, today we see stories that Gates is using coronavirus vaccinations to sterilise people. …

The research also hints at a way of breaking through conspiracy theory logic, offering a glimmer of hope as increasing numbers of people get drawn in.

The story then addresses the question of what difference the research might make. What good would a folklore map of a conspiracy theory do? The challenge for researchers is that simply providing more information clearly doesn’t work in a world of information overload.

The paper the story is based on is Conspiracy in the time of corona: automatic detection of emerging Covid-19 conspiracy theories in social media and the news, by Shadi Shahsavari, Pavan Holur, Tianyi Wang, Timothy R. Tangherlini and Vwani Roychowdhury.
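
To give a rough sense of what a narrative network looks like, here is a toy sketch in Python using networkx. The actors, things and relationships below are invented for illustration; this is not the authors’ pipeline or data, just the kind of graph the researchers describe extracting from social media posts.

```python
import networkx as nx

# Toy narrative network: nodes are actors/things, edges are alleged relationships.
# The entities and links here are illustrative, not the paper's actual data.
G = nx.Graph()
edges = [
    ("Bill Gates", "vaccines", "funds"),
    ("vaccines", "coronavirus", "allegedly linked to"),
    ("coronavirus", "5G", "allegedly spread by"),
    ("Bill Gates", "5G", "allegedly profits from"),
]
for source, target, relation in edges:
    G.add_edge(source, target, relation=relation)

# Degree centrality hints at which actors hold the narrative together.
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{node}: {score:.2f}")
```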

Dead By Daylight fans unhappy Hellraiser model is an NFT

Apparently Non-Fungible Tokens (NFTs) of game models are not going down well with fans, according to the story Dead By Daylight fans unhappy Hellraiser model is an NFT.

Even though Behaviour isn’t selling the NFTs themselves, they are facilitating their sale by providing the models from the game. Gaming fans seem to view blockchain and NFTs as dubious and environmentally unsound technology. Behaviour’s response was,

We hear and understand the concerns you raised over NFTs. Absolutely zero blockchain tech exists in Dead by Daylight. Nor will it ever. Behaviour Interactive does not sell NFTs.

On a related note, Valve is banning blockchain and NFT games.

Trump Tweet Archive

All 50,000+ of Trump’s tweets, instantly searchable

Thanks to Kaylin I found the Trump Twitter Archive: TTA – Search. It’s a really nice, clean site that lets you search or filter Trump’s tweets from when he was elected to when his account was shut down on January 8th, 2021. You can also download the data if you want to try other tools.
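
Since the data can be downloaded, a short script is enough to filter the tweets locally. The file name and field names below are assumptions about the export format; adjust them to whatever the actual download provides.

```python
import json

# Load a downloaded export of the archive.
# "trump_tweets.json" and the field names are assumptions about the export format.
with open("trump_tweets.json", encoding="utf-8") as f:
    tweets = json.load(f)

# Filter tweets mentioning a keyword, most recent first (assumes sortable date strings).
keyword = "vaccine"
matches = [t for t in tweets if keyword.lower() in t.get("text", "").lower()]
for tweet in sorted(matches, key=lambda t: t.get("date", ""), reverse=True)[:5]:
    print(tweet.get("date", "?"), "-", tweet.get("text", "")[:120])
```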

I find reading his tweets now to be quite entertaining. Here are two back-to-back tweets that seem to almost contradict each other. First he boasts about the delivery of vaccines, and then talks about Covid as Fake News!

Jan 3rd 2021 – 8:14:10 AM EST: The number of cases and deaths of the China Virus is far exaggerated in the United States because of @CDCgov’s ridiculous method of determination compared to other countries, many of whom report, purposely, very inaccurately and low. “When in doubt, call it Covid.” Fake News!

Jan 3rd 2021 – 8:05:34 AM EST: The vaccines are being delivered to the states by the Federal Government far faster than they can be administered!

Apple will scan iPhones for child pornography

Apple unveiled new software Thursday that scans photos and messages on iPhones for child pornography and explicit messages sent to minors in a major new effort to prevent sexual predators from using Apple’s services.

The Washington Post and other news venues are reporting that Apple will scan iPhones for child pornography. As the subtitle to the article puts it, “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponize the feature.” Child porn is the go-to case when organizations want to defend surveillance.

The software will scan without our knowledge or consent, which raises privacy issues. What are the chances of false positives? What if the tool is adapted to catch other types of images? Edward Snowden and the EFF have criticized this move. It seems inconsistent with Apple’s firm position on privacy and its refusal to even unlock iPhones for law enforcement.
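
News reports described the system as matching hashes of photos against a database of known images, which is where the false-positive question bites: perceptual hashes are designed to tolerate small changes, so unrelated images can occasionally land close together. Here is a generic sketch of that kind of matching using the Python imagehash library; it is not Apple’s actual algorithm, and the file names and threshold are made up for illustration.

```python
from PIL import Image
import imagehash

# Generic perceptual hashing, not Apple's system: similar images get similar hashes,
# so matching uses a distance threshold rather than exact equality.
known_hashes = [imagehash.phash(Image.open(p)) for p in ["known_1.jpg", "known_2.jpg"]]

def is_flagged(photo_path: str, threshold: int = 5) -> bool:
    """Flag a photo if its hash is within `threshold` bits of any known hash."""
    h = imagehash.phash(Image.open(photo_path))
    return any(h - known <= threshold for known in known_hashes)

# A looser threshold catches re-encoded copies of known images,
# but also raises the chance that an unrelated photo is a false positive.
print(is_flagged("my_photo.jpg"))
```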

It strikes me that there is a great case study here.