Pius Adesanmi on Africa is the Forward

Today I learned about Pius Adesanmi, who died in the recent Ethiopian Airlines crash. By all accounts he was an inspiring professor of English and African Studies at Carleton. You can hear him in his TEDxEuston talk, or you can read his collection of satirical essays, Naija No Dey Carry Last: Thoughts on a Nation in Progress.

In the TEDx talk he makes a prescient point about new technologies,

We are undertakers. Man will always preside over the funeral of any piece of technology that pretends to replace him.

He connects this prediction, that all new technologies, including AI, will also pass on, with a reflection on Africa as a place from which to understand technology.

And that is what Africa understands so well. Should Africa face forward? No. She understands that there will be man to preside over the funeral of these new innovations. She doesn’t need to face forward if she understands human agency. Africa is the forward that the rest of humanity must face.

We need this vision of/from Africa. It gets ahead of the ever-returning hype cycle of new technologies. It imagines a position from which we escape the never-ending discourse of disruptive innovation, which limits our options in the face of AI.

May Pius Adesanmi rest in peace.

A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning

Greene, Hoffmann, and Stark have written a much-needed conference paper, Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning (PDF), for the Hawaii International Conference on System Sciences in Maui, HI. They look at a number of the important ethics statements/declarations out there and try to understand their “moral background.” Here is the abstract:

This paper uses frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.

I get the feeling that various outfits (of experts) are trying to define what ethics in AI/ML is rather than engaging in a dialogue. There is a rush to be the expert on ethics. Perhaps we should imagine a different way of developing an ethical consensus.

For that matter, is there room for critical positions? What would it mean to call for a stop to all research into AI/ML as unethical until proven otherwise? Is that even thinkable? Can we imagine another way that the discourse of ethics might play out?

This article is a great start.

Making AI accountable easier said than done, says U of A expert

Geoff McMaster of the Folio (U of A’s news site) wrote a nice article about how Making AI accountable easier said than done, says U of A expert. The article quotes me on accountability and artificial intelligence. What we didn’t really talk about is forms of accountability for automata, including:

  • Explainability – Can someone get an explanation as to how and why an AI made a decision that affects them? If people can get an explanation that they can understand then they can presumably take remedial action and hold someone or some organization accountable.
  • Transparency – Is an automated decision making process fully transparent so that it can be tested, studied and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability.
  • Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?

In all these cases there is a presumption of process, both to determine explainability, transparency, or responsibility and then to punish or correct problems. Otherwise people have no real recourse.
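To make the first of these concrete, here is a minimal sketch of what an explanation might look like, assuming scikit-learn. The loan scenario, feature names, and numbers are invented for illustration; a linear model is used because its per-feature contributions are easy to read off.

```python
# Minimal sketch of "explainability": report which features drove a
# single automated decision. The loan scenario and feature names are
# hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical
X = np.array([[55.0, 0.40, 2], [80.0, 0.10, 10],
              [30.0, 0.90, 1], [70.0, 0.20, 7]])  # toy training data
y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([40.0, 0.75, 3])
decision = model.predict(applicant.reshape(1, -1))[0]

# For a linear model, coefficient * feature value gives each feature's
# contribution to the decision score -- a crude but readable explanation.
contributions = model.coef_[0] * applicant
print("decision:", "approved" if decision == 1 else "denied")
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {c:+.3f}")
```

Real systems are rarely this legible, which is part of why transparency is seen as the higher bar.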

Writing with the machine

“…it’s like writing with a deranged but very well-read parrot on your shoulder.”

Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore, has been doing some interesting work with recurrent neural nets to generate text. See Writing with the machine. He trained a machine on science fiction and then hooked it into a text editor so it can complete sentences. The New York Times has a nice story on Sloan’s experiments, Computer Stories: A.I. Is Beginning to Assist Novelists.

One wonders what it would be like if you trained it on your own writing. Would it help you be yourself or discourage you from rereading your prose?
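Sloan’s set-up used a recurrent neural net trained on a large corpus. As a much lighter stand-in, here is a sketch of the same basic move, learn from a corpus and then extend a fragment, using a simple character-level Markov model; the inline corpus string is a placeholder for a real text file of, say, science fiction or your own writing.

```python
# Hedged sketch: a character-level Markov sampler that completes a
# sentence fragment from a corpus. Sloan's actual tool used a recurrent
# neural net; this is only a stand-in to show the basic idea. The
# corpus string is a placeholder for a real text file.
import random
from collections import defaultdict

corpus = (
    "the ship drifted between the stars and the silence of the void "
    "the machine dreamed of the sea and the ship answered the stars "
)

ORDER = 4  # characters of context used to pick the next character
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

def complete(fragment, length=80):
    """Extend a sentence fragment, one character at a time."""
    out = fragment
    for _ in range(length):
        choices = model.get(out[-ORDER:])
        if not choices:
            break
        out += random.choice(choices)
    return out

print(complete("the ship "))
```

Swap your own prose in as the corpus and you get a crude version of the experiment imagined above.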


Making AI accountable easier said than done, says U of A expert

The Folio has a story on the ethics of AI that quotes me, titled Making AI accountable easier said than done, says U of A expert.

One of the issues that interests me most now is the history of this discussion. We tend to treat the ethics of AI as a new issue, but people have been thinking about how automation would affect people for some time. There have been textbooks for teaching computer ethics, like Deborah G. Johnson’s Computer Ethics, since the 1980s. As part of research we did on how computers were presented in the news, we found articles from the 1960s about how automation might put people out of work. They weren’t thinking of AI then, but the ethical and social effects that concerned people back then were similar.

What few people discussed, however, was how automation affected different groups differently. Michele Landsberg wrote a prescient article, “Will Computer Replace the Working Girl?”, in 1964 for the women’s section of The Globe and Mail that argued it was women in the typing pools who were being put out of work. Likewise, I suspect that some groups will be more affected by AI than others and that we need to prepare for that.

A good book addressing how universities might prepare for the disruption of artificial intelligence is Robot-Proof: Higher Education in the Age of Artificial Intelligence by Joseph Aoun (MIT Press, 2017).

Instead of educating college students for jobs that are about to disappear under the rising tide of technology, twenty-first-century universities should liberate them from outdated career models and give them ownership of their own futures. They should equip them with the literacies and skills they need to thrive in this new economy defined by technology, as well as continue providing them with access to the learning they need to face the challenges of life in a diverse, global environment.

Letting neural networks be weird

Halloween Costume Names Generated by a Weird AI

Jingwei, a bright digital humanities student working as a research assistant, has been playing with generative AI approaches from aiweirdness.com – Letting neural networks be weird. Janelle Shane has made neural networks funny by using them to generate things like New My Little Ponies. Jingwei scraped the titles of digital humanities conference papers from various conference sites and trained a network to generate new titles just waiting to be proposed as papers (a rough sketch of this sort of generator follows the list):

  • The Catalogue of the Cultural Heritage Parts

  • Automatic European Pathworks and Indexte Corpus and Mullisian Descriptions

  • Minimal Intellectual tools and Actorical Normiels: The Case study of the Digital Humanities Classics

  • Automatic European Periodical Mexico: The Case of the Digital Hour

  • TEIviv Industics – Representation dans le perfect textbook

  • Conceptions of the Digital Homer Centre

  • Preserving Critical Computational App thinking in DH Languages

  • DH Potential Works: US Work Film Translation Science

  • Translation Text Mining and GiS 2.0

  • DH Facilitating the RIATI of the Digital Scholar

  • Shape Comparing Data Creating and Scholarly Edition

  • DH Federation of the Digital Humanities: The Network in the Halleni building and Web Study of Digital Humanities in the Hid-Cloudy

  • The First Web Study of Build: A “Digitie-Game as the Moreliency of the Digital Humanities: The Case study of the Digital Hour: The Scale Text Story Minimalism: the Case of Public Australian Recognition Translation and Puradopase

  • The Computational Text of Contemporary Corpora

  • The Social Network of Linguosation in Data Washingtone

  • Designing formation of Data visualization

  • The Computational Text of Context: The Case of the World War and Athngr across Theory

  • The Film Translation Text Center: The Context of the Cultural Hermental Peripherents

  • The Social Infrastructure  PPA: Artificial Data In a Digital Harl to Mexquise (1950-1936)

  • EMO Artificial Contributions of the Hauth Past Works of Warla Management Infriction

  • DAARRhK Platform for Data

  • Automatic Digital Harlocator and Scholar

  • Complex Networks of Computational Corpus

  • IMPArative Mining Trail with DH Portal

  • Pursour Auchese of the Social Flowchart of European Nation

  • The Stefanopology: The Digital Humanities
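For the curious, here is a rough sketch of how a generator like this might be wired up: a tiny character-level network in PyTorch, trained on one title per line. Janelle Shane’s own tools differ, and the two seed titles below are placeholders for Jingwei’s scraped corpus.

```python
# Hedged sketch: tiny character-level GRU language model for generating
# "titles". The two titles below are placeholders for a real scraped
# corpus of conference paper titles, one per line.
import torch
import torch.nn as nn

titles = ["Text Mining the Digital Archive",
          "Minimal Computing for the Humanities"]
text = "\n".join(titles) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    """Embed characters, run a GRU, predict the next character."""
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)

# Train on next-character prediction over the whole corpus.
for step in range(300):
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new "title" one character at a time, newline to newline.
idx = torch.tensor([[stoi["\n"]]])
h, generated = None, []
for _ in range(60):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[0, -1] / 0.8, dim=0)  # temperature 0.8
    idx = torch.multinomial(probs, 1).reshape(1, 1)
    ch = chars[idx.item()]
    if ch == "\n":
        break
    generated.append(ch)
print("".join(generated))
```

The temperature parameter controls how weird the samples get: lower values play it safe, higher values produce more of the delightful nonsense above.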

Anatomy of an AI System

Anatomy of an AI System – The Amazon Echo as an anatomical map of human labor, data and planetary resources. By Kate Crawford and Vladan Joler (2018)

Kate Crawford and Vladan Joler have created a powerful infographic and web site, Anatomy of an AI System. The dark illustration and site are an essay that starts with the Amazon Echo and then sketches out the global anatomy of this apparently simple AI appliance. They do this by looking at where the materials come from, where the labour comes from (and goes), and the underlying infrastructure.

Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data.

The essay/visualization is a powerful example of how we can learn by critically examining the technologies around us.

Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.

Big Tech’s Half-Hearted Response To Fake News And Election Hacking

Despite big hand waves, Facebook, Google, and Twitter aren’t doing enough to stop misinformation.

From Slashdot I found a story: Big Tech’s Half-Hearted Response To Fake News And Election Hacking. This Fast Company story talks about the ways social media companies are trying to prevent the misuse of their platforms as we head into the US midterms.

For Facebook, Google, and Twitter the fight against fake news seems to be two-pronged: De-incentivize the targeted content and provide avenues to correct factual inaccuracies. These are both surface fixes, however, akin to putting caulk on the Grand Canyon.

And, despite grand hand waves, both approaches are reactive. They don’t aim at understanding how this problem became prevalent, or creating a method that attacks the systemic issue. Instead these advertising giants implement new mechanisms by which people can report one-off issues—and by which the platforms will be left playing cat-and-mouse games against fake news—all the while giving no real clear glimpse into their opaque ad platforms.

The problem is that these companies make too much money from ads and elections are a chance to get lots of ads, manipulative or not. For that matter, what political ad doesn’t try to manipulate viewers?

The Slashdot story was actually about Mozilla’s Responsible Computer Science Challenge, which will support initiatives to embed ethics in computer science courses. Alas, the efficacy of ethics courses is questionable. Aristotle would say that if you don’t have the disposition to be ethical, no amount of training will do any good. It just helps the unethical pretend to be ethical.

Self-driving pods are slow, boring, and weird-looking — and that’s a good thing

Driverless pods, retirement communities, and grocery delivery

Autonomous vehicles are here! That’s the message from a panel on AI and Transportation I listened to at the International Symposium on Applications of Artificial Intelligence held here at the University of Alberta.

Waymo, the Google spin-off, is bringing autonomous taxis to Phoenix this fall. Other companies are developing shuttles and other types of pods, as described in Self-driving pods are slow, boring, and weird-looking — and that’s a good thing.

It seems to me that there hasn’t really been a discussion about what would benefit society. Companies will invest where they see economic opportunity, but what should we as a society do with such technology? At the moment the technology seems to be used either in luxury cars to provide assistance to the driver or imagined as a replacement for taxi and Uber drivers. What will happen to these drivers?

AI Weirdness

I just came across a neat site called AI Weirdness. The site describes all sorts of “weird” experiments in training neural networks, like the Halloween costume names and My Little Pony names mentioned above.

The site has a nice FAQ that describes Janelle Shane’s tools and how to learn to do this sort of thing yourself.