Predatory community

Projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud are demonstrating predatory community.

Through a Washington Post article I discovered Molly White, who has been documenting the alt-right and now the crypto community. She has a blog at Molly White and a site that documents the problems of crypto at Web3 is going just great. There is, of course, a connection between the alt-right and crypto broculture, something she talks about in posts like Predatory community, which is about how crypto promotions try to build community and are now playing the inclusive card, aiming at marginalized communities and trying to convince them that now they can get in on the action and build community. She calls this “predatory community.”

Groups that operate under the guise of inclusion, regardless of their intentions, are serving the greater goal of crypto that keeps the whole thing afloat: finding ever more fools to buy in so that the early investors can take their profits. And it is those latecomers who are left holding the bag in the end.

With projects that seek to provide services and opportunities to members of marginalized groups who have previously not had access, but on bad terms that ultimately disadvantaged them, we see predatory inclusion. With projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud, we are now seeing predatory community.

Street View Privacy

How do you feel about people being able to look at your house in Google Street View? Popular Science has an article by David Nield on “How to hide your house on every map app: Stop people from peering at your place” (May 18, 2022).

This raises questions about where privacy starts and a right to look or know stops. Can I not walk down a street and look at the faces of houses? Why then should I not be able to look at the face on Street View and other similar technologies? What about the satellite view? Do people have the right to see into my back yard from above?

This is a similar issue, though less fraught, to face databases. What rights do I have to my face? How would those rights connect to laws about Name, Image and Likeness (NIL), or rights of publicity, which recently became an issue in amateur sports in the US? As for Canada, rights of publicity are complex and vary from province to province, but there is generally a recognition that:

  • People should have the right “to control the commercial use of name, image, likeness and other unequivocal aspects of one’s identity (eg, the distinct sound of someone’s voice).” (See Lexology article)
  • At the same time there is recognition that NIL can be used to provide legitimate information to the public.

Returning to the blurring of your house facade in Street View, I’m guessing the main reason the companies provide this is security for people in sensitive positions or people being stalked.

Health agency tracked Canadians’ trips to liquor store via phones during pandemic

The report reveals PHAC was able to view a detailed snapshot of people’s behaviour, including grocery store visits, gatherings with family and friends, time…

The National Post is reporting about the Public Health Agency of Canada and their use of mobility data that a group of us wrote about in The Conversation (Canada). The story goes into more detail about how Health agency tracked Canadians’ trips to liquor store via phones during pandemic. The government provided one of the reports commissioned by PHAC from BlueDot to the House of Commons. The Ethics Committee report discussing what happened and making recommendations is here.

Why are women philosophers often erased from collective memory?

The history of ideas still struggles to remember the names of notable women philosophers. Mary Hesse is a salient example

Aeon has an important essay on Why are women philosophers often erased from collective memory? The essay argues that a number of important women philosophers, including Mary Hesse, have been lost (made absent) despite their importance. (You can see her Models and Analogies in Science through the Internet Archive.)

I read this after reading a chapter from Sara Ahmed’s Living a Feminist Life where Ahmed talks about citation practices and how disciplines exclude diverse work in different ways. She does a great job of confronting the various excuses people have for their bleached white citations. Poking around, I find others have written on this, including Victor Ray, who in an Inside Higher Ed essay on The Racial Politics of Citation references Richard Delgado’s The Imperial Scholar: Reflections on a Review of Civil Rights Literature from 1984.

What should be done about this? Obviously I’m not the best to suggest remedies, but here are some of the ideas that show up:

  • We need to commit to take the time to look at the works we read on a subject or for a project and to ask whose voice is missing. This shouldn’t be done at the end as a last minute fix, but during the ideation phase.
  • We should gather and confront data on our citational patterns from our publications. Knowing what you have done is better than not knowing.
  • We need to do the archaeological work to find and recover marginalized thinkers who have been left out and reflect on why they were left out. Then we need to promote them in teaching and research.
  • We should be willing to call out grants, articles, and proposals we review when it could make a difference.
  • We need to support work to translate thinkers whose work is not in English to balance the distribution of influence.
  • We need to be willing to view our field and its questions very differently.

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems.” She talked about a case she was involved in regarding DNA matching software for criminal cases, where they were able to actually get the code and show that the software would, under certain circumstances, generate false positives (matching people’s DNA to DNA from a crime scene when it shouldn’t have been).

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged/regulated to invest some of the profit they make from the efficiencies of AI back into improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, together with a mechanism of redress. She pointed out how humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to get to sentience. The idea of the game is that you are an AI running a paperclip company and you make decisions and investments. The game was inspired by the philosopher Nick Bostrom‘s paperclip maximizer thought experiment, which illustrates the risk that some harmless AI that controls the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a risk to us. It might even convert all the resources of the universe into paperclips. The original thought experiment appears in Bostrom’s paper Ethical Issues in Advanced Artificial Intelligence, where it illustrates the point that “Artificial intellects need not have humanlike motives.”

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.
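To make the thought experiment concrete, here is a minimal toy sketch of my own (not code from the game or from Bostrom’s paper): an optimizer whose only objective is paperclips, which reinvests in capacity (an instrumental goal) and converts resources until nothing is left. The World interface, the numbers, and the reinvestment rule are all invented for illustration.

    // A toy "paperclip maximizer": its only objective is more paperclips.
    // Everything here (names, numbers, rules) is invented for illustration.
    interface World {
      wire: number;       // remaining raw resource
      capacity: number;   // how many paperclips can be made per step
      paperclips: number;
    }

    // One decision step: reinvest some wire into capacity (an instrumental goal),
    // then convert as much of the remaining wire as possible into paperclips.
    function step(w: World): World {
      const invested = w.wire * 0.1;
      const available = w.wire - invested;
      const made = Math.min(available, w.capacity + invested);
      return {
        wire: available - made,
        capacity: w.capacity + invested,
        paperclips: w.paperclips + made,
      };
    }

    let world: World = { wire: 1000, capacity: 1, paperclips: 0 };
    while (world.wire > 1e-9) {
      world = step(world);
    }
    // Every unit of wire has become either a paperclip or paperclip-making capacity;
    // nothing in the objective ever said to stop.
    console.log(world.paperclips.toFixed(1), "paperclips; wire left:", world.wire);

The behaviour is driven entirely by an arbitrary objective, which is the point of the passage above: an artificial intellect need not share our motives to end up consuming everything.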

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.

I learned about the game from an interesting blog entry by David Rosenthal on how It Isn’t About The Technology. It is a response to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and how they might save us, to which Rosenthal responds that it isn’t about the technology.

One of the more interesting ideas that Rosenthal mentions is from Charles Stross’s keynote for the 34th Chaos Communication Congress to the effect that businesses are “slow AIs”. Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous – something we are seeing with Google and Facebook.

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa’s use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a research retreat (Dagstuhl) on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

Value Sensitive Design and Dark Patterns

Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to. The purpose of this site is to spread awareness and to shame companies that use them.

Reading about Value Sensitive Design, I came across a link to Harry Brignull’s Dark Patterns. The site is about ways that web designers try to manipulate users. They have a Hall of Shame that is instructive and a Reading List if you want to follow up. It is interesting to see attempts to regulate certain patterns of deception.
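To make the idea concrete, here is a hypothetical sketch (not taken from Brignull’s site) of one classic pattern of this kind: a pre-checked marketing opt-in with deliberately confusing wording. The “#checkout-form” element and the label text are invented for this example.

    // Hypothetical dark pattern: a pre-ticked opt-in with double-negative wording.
    const optIn = document.createElement("input");
    optIn.type = "checkbox";
    optIn.id = "partner-offers";
    optIn.checked = true; // pre-checked: the user has to notice it and act to opt out

    const label = document.createElement("label");
    label.htmlFor = "partner-offers";
    label.textContent =
      "Uncheck this box if you do not want us to not share your email with selected partners.";

    // "#checkout-form" is an assumed element on the page for this sketch.
    document.querySelector("#checkout-form")?.append(optIn, label);

The deception is not in any single line of code but in the defaults and the wording, which is part of what makes such patterns hard to regulate on purely technical grounds.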

Values are expressed and embedded in technology; they have real and often non-obvious impacts on users and society.

The alternative is to introduce values and ethics into the design process. This is where Value Sensitive Design comes in. As developed by Batya Friedman and colleagues, it is an approach that includes methods for thinking through the ethics of a project from the beginning. Some of the approaches mentioned in the article include:

  • Mapping out what a design will support, hinder or prevent.
  • Considering the stakeholders, especially those that may not have any say in the deployment or use of a technology.
  • Trying to understand the underlying assumptions of technologies.
  • Broadening our gaze as to the effects of a technology on human experience.

They have even produced a set of Envisioning Cards for sale.

In Isolating Times, Can Robo-Pets Provide Comfort? – The New York Times

As seniors find themselves cut off from loved ones during the pandemic, some are turning to automated animals for company.

I’m reading about Virtual Assistants and thinking that in some ways the simplest VAs are the robopets that are being given to lonely elderly people who are isolated. See In Isolating Times, Can Robo-Pets Provide Comfort? Robo-cats and dogs (and even seals) seem to provide comfort the way a stuffed pet might. They aren’t even that smart, but can give comfort to an older person suffering from isolation.

These pets, like PARO (an expensive Japanese robotic seal) or the much cheaper Joy for All pets, can possibly fool people with dementia. What are the ethics of this? Are we comfortable fooling people for their own good?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and how it is not enough to just offer male voices, but that we need to queer the voices. It mentions the ethical issue of how voice conveys information like whether the VA is a bot or not.