AI, Ethics And Society

Last week we held a conference on AI, Ethics and Society at the University of Alberta. As I often do, I kept conference notes at: philosophi.ca : AI Ethics And Society.

The conference was opened by Reuben Quinn, whose grandfather signed Treaty 6. He challenged us to think about what labels and labelling mean. Later, Kim TallBear challenged us to think about how we want the encounter with other intelligences to go. We don’t have a good track record of encountering the other and respecting its intelligence. Now is the time to think about our positionality and to develop protocols for such encounters. We should also be open to different forms of intelligence, not just our own.

$432,000 painting “by AI” sold at Christie’s

A painting created using GANs (generative adversarial networks) sold for $432,000 at Christie’s today.

Last year a $432,000 painting “by AI” sold at Christie’s. The painting was created by a collective called Obvious using a Generative Adversarial Network. In an essay titled A naive yet educated perspective on Art and Artificial Intelligence, they describe how they created the work.

Generative Adversarial Networks (GANs) analyze tens of thousands of images, learn from their features, and are trained with the aim to create new images that are undistinguishable from the original data source.
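To make the “adversarial” part concrete, here is a minimal sketch of the two-network training loop in PyTorch. This shows the general technique, not Obvious’s actual code; the network shapes, sizes and learning rates are invented for illustration.

```python
# Minimal GAN training step (illustrative sizes, not Obvious's code).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # invented dimensions

# Generator maps random noise to an image; discriminator scores realism.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    # 1. Train D to call real images real and generated images fake.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2. Train G to produce images that D scores as real.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The two losses pull against each other: as the discriminator improves, the generator is pushed toward images that are harder and harder to tell from the training set.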

They also point out that many of the same concerns people have about AI art today were voiced about photography in the 19th century. Photography automated the image-making business much as AIs are automating other tasks.

Can we use these GANs for other generative scholarship?

Centrelink scandal

Data shows 7,456 debts were reduced to zero and another 12,524 partially reduced between July last year and March

The Guardian has a number of stories on the Australian Centrelink scandal including, Centrelink scandal: tens of thousands of welfare debts wiped or reduced. The scandal arose when the government introduced changes to a system for calculating overpayments to welfare recipients and clawing them back, changes that removed much of the human oversight. The result was lots of miscalculated debts being automatically assigned to some of the most vulnerable. A report, Paying the Price of Welfare Reform, concluded that,

The research concludes that although welfare reform may be leading to cost savings for the Department of Human Services (DHS), substantial costs are being shifted to vulnerable customers and the community services that support them. It is they that are paying the price of welfare reform.
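The miscalculation at the heart of the reporting came from income averaging: the automated system spread a person’s annual tax-office income evenly across fortnights, which can manufacture a debt for anyone whose income was uneven across the year. A toy illustration in Python, with all figures invented:

```python
# Toy illustration of fortnightly income averaging (all figures invented).
annual_income = 26_000                 # yearly total reported to the tax office
fortnights = 26
averaged = annual_income / fortnights  # system assumes $1,000 every fortnight

# In reality the person worked half the year and was on benefits the other half.
actual_income = [2_000] * 13 + [0] * 13
on_benefits = [False] * 13 + [True] * 13

cutoff = 400  # hypothetical fortnightly income above which benefits are repayable

def flagged_fortnights(incomes):
    # Count benefit fortnights where income appears to exceed the cutoff.
    return sum(1 for income, b in zip(incomes, on_benefits) if b and income > cutoff)

print(flagged_fortnights([averaged] * fortnights))  # 13: a debt is invented
print(flagged_fortnights(actual_income))            # 0: no actual overpayment
```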


We Built a (Legal) Facial Recognition Machine for $60

The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.

The New York Times has an opinion piece by Sahil Chinoy on how they built a facial recognition machine for $60: We Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment they ran where they took footage of people walking past cameras installed in Bryant Park and compared the faces to those of people known to work in the area (scraped from the web sites of organizations that have offices in the neighborhood). Everything they did used public resources that others could use: the cameras stream their footage publicly, anyone can scrape the images, the image database they gathered came from public web sites, and the matching software is a commercial service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
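I’m only guessing at Rekognition above, but it gives a sense of how little code such a service requires. A sketch assuming Amazon Rekognition through boto3; the file names and threshold are hypothetical:

```python
# Sketch of face matching with a hosted service (hypothetical file names).
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("staff_photo.jpg", "rb") as staff, open("park_frame.jpg", "rb") as frame:
    response = client.compare_faces(
        SourceImage={"Bytes": staff.read()},  # face scraped from a public site
        TargetImage={"Bytes": frame.read()},  # frame grabbed from a public stream
        SimilarityThreshold=80,               # only report matches above 80%
    )

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")
```

Loop that over a day of frames and a folder of scraped staff photos and you have, in effect, what the Times built.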

I’m intrigued by this experiment by the New York Times. It is a form of design thinking where they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could say it is a form of journalistic experimentation.

Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for being recognized in all sorts of situations? Do we need to start guarding our faces?

Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.

This is one of a number of excellent articles by the New York Times that are part of their Privacy Project.

Are Robots Competing for Your Job?

Are robots competing for your job?
Probably, but don’t count yourself out.

The New Yorker magazine has a great essay by Jill Lepore, Are Robots Competing for Your Job? (Feb. 25, 2019). The essay surveys the various predictions, including the prediction that R.I. (Remote Intelligence, or global workers) will take your job too. The fear of robots is the other side of the coin of the fear of immigrants, which raises the question of why we are panicking over jobs when unemployment is so low.

Misery likes a scapegoat: heads, blame machines; tails, foreigners. But is the present alarm warranted? Panic is not evidence of danger; it’s evidence of panic. Stoking fear of invading robots and of invading immigrants has been going on for a long time, and the predictions of disaster have, generally, been bananas. Oh, but this time it’s different, the robotomizers insist.

Lepore points out how many job categories have been lost only to be replaced by others, which is why economists are apparently dismissive of the anxiety.

Some questions we should be asking include:

  • Who benefits from all these warnings about job loss?
  • How do these warnings function rhetorically? What else might they be saying? How are they interpretations of the past by futurists?
  • How is the panic about job losses tied to worries about immigration?

Artificial intelligence: Commission takes forward its work on ethics guidelines

The European Commission has announced the next step in its Artificial Intelligence strategy. See Artificial intelligence: Commission takes forward its work on ethics guidelines. It appointed a High-Level Expert Group in June of 2018. This group has now developed Seven essentials for achieving trustworthy AI:

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The next step, now announced, is a pilot phase that tests these essentials with stakeholders. The Commission also wants to cooperate with “like-minded partners” like Canada.

What would it mean to participate in the pilot?

Pius Adesanmi on Africa is the Forward

Today I learned about Pius Adesanmi, who died in the recent Ethiopian Airlines crash. By all accounts he was an inspiring professor of English and African Studies at Carleton. You can hear him in his TEDxEuston talk, or you can read from his collection of satirical essays titled Naija No Dey Carry Last: Thoughts on a Nation in Progress.

In the TEDx talk he makes a prescient point about new technologies,

We are undertakers. Man will always preside over the funeral of any piece of technology that pretends to replace him.

He connects this prediction, that all new technologies, including AI, will also pass on, with a reflection on Africa as a place from which to understand technology.

And that is what Africa understands so well. Should Africa face forward? No. She understands that there will be man to preside over the funeral of these new innovations. She doesn’t need to face forward if she understands human agency. Africa is the forward that the rest of humanity must face.

We need this vision of/from Africa. It gets ahead of the ever-returning hype cycle of new technologies. It imagines a position from which we can escape the never-ending discourse of disruptive innovation that limits our options in the face of AI.

May Pius Adesanmi rest in peace.

A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning

Greene, Hoffmann, and Stark have written a much-needed conference paper, Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning (PDF), for the Hawaii International Conference on System Sciences in Maui, HI. They look at a number of the important ethics statements/declarations out there and try to understand their “moral background.” Here is the abstract:

This paper uses frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.

I get the feeling that various outfits (of experts) are trying to define what ethics in AI/ML is rather than engaging in a dialogue. There is a rush to be the expert on ethics. Perhaps we should imagine a different way of developing an ethical consensus.

For that matter, is there room for critical positions? What would it mean to call for a halt to all research into AI/ML as unethical until proven otherwise? Is that even thinkable? Can we imagine another way that the discourse of ethics might play out?

This paper is a great start.

Making AI accountable easier said than done, says U of A expert

Geoff McMaster of the Folio (U of A’s news site) wrote a nice article, Making AI accountable easier said than done, says U of A expert. The article quotes me on accountability and artificial intelligence. What we didn’t really talk about is forms of accountability for automata, including:

  • Explainability – Can someone get an explanation as to how and why an AI made a decision that affects them? If people can get an explanation that they can understand, then they can presumably take remedial action and hold someone or some organization accountable. (A toy sketch of what an explanation might look like follows below.)
  • Transparency – Is an automated decision making process fully transparent so that it can be tested, studied and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability.
  • Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?

In all these cases there is a presumption of process, both to determine transparency/responsibility and then to punish or correct for problems. Otherwise people will have no real recourse.
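To give a toy sense of explainability: with a simple linear model, one can at least report which features drove a decision and by how much, the sort of explanation a person might actually act on. This is an invented example, not anything from the Folio article; the features, data and loan scenario are all hypothetical.

```python
# Toy "explainability": report each feature's contribution to a decision.
# Features, data and the loan scenario are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt", "years_employed"]
X = np.array([[40, 10, 2], [80, 5, 10], [20, 20, 1], [60, 2, 7]], dtype=float)
y = np.array([0, 1, 0, 1])  # hypothetical past loan decisions

model = LogisticRegression().fit(X, y)

applicant = np.array([50.0, 15.0, 3.0])
# For a linear model, coefficient * feature value is that feature's
# contribution to the decision score, a directly reportable explanation.
contributions = model.coef_[0] * applicant

for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.2f}")
print("decision:", model.predict([applicant])[0])
```

Deep models don’t decompose this neatly, which is why transparency is often seen as a higher bar than explainability.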

Writing with the machine

“…it’s like writing with a deranged but very well-read parrot on your shoulder.”

Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore, has been doing some interesting work with recurrent neural nets in order to generate text. See Writing with the machine. He trained a machine on science fiction and then hooked it into a text editor so it can complete sentences. The New York Times has a nice story on Sloan’s experiments, Computer Stories: A.I. Is Beginning to Assist Novelists.
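Sloan trained his own recurrent network on a corpus of science fiction. As a rough stand-in, here is how one might wire a generic pretrained language model into a “complete my sentence” function using the transformers library; this is not Sloan’s setup, and the model and prompt are just placeholders.

```python
# Rough stand-in for Sloan's tool: machine-suggested sentence completions.
# (He trained his own RNN on science fiction; GPT-2 substitutes here.)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def complete(sentence_start: str, length: int = 30) -> str:
    """Offer a machine continuation for a half-written sentence."""
    result = generator(sentence_start, max_new_tokens=length,
                       do_sample=True, num_return_sequences=1)
    return result[0]["generated_text"]

print(complete("The ship drifted past the last beacon, and"))
```

An editor plugin would simply call something like complete() on the text before the cursor and offer the continuation as a suggestion.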

One wonders what it would be like if you trained it on your own writing. Would it help you be yourself or discourage you from rereading your prose?