Introducing the AI4Society Signature Area

AI4Society will provide institutional leadership in this exciting area of teaching, research, and scholarship.

The Quad has a story, Introducing the AI4Society Signature Area. Artificial Intelligence for Society is a University of Alberta Signature Area that brings together researchers and instructors from both the sciences and the arts. AI4S looks at how AI can be imagined, designed, and tested so that it serves society. I’m lucky to contribute to this Area as the Associate Director, working with the Director, Eleni Stroulia from Computing Science.

Google Developers Blog: Text Embedding Models Contain Bias. Here’s Why That Matters.

Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we’ll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.

On the Google Developers Blog there is an interesting post, Text Embedding Models Contain Bias. Here’s Why That Matters. The post describes a technique, the Word Embedding Association Test (WEAT), for comparing different text embedding algorithms. The idea is to see whether groups of words, like gendered names, associate with positive or negative words. In the image above you can see the sentiment bias for female and male names under different techniques.

While Google is using WEAT to try to detect and deal with bias in its models, in our case this technique could be used to identify forms of bias in corpora. A minimal sketch of the test follows.
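Here is a rough sketch of the WEAT effect-size calculation, assuming `emb` is a mapping from words to vectors loaded from some pretrained embedding model; the word lists in the usage comment are invented for illustration:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): how much more strongly word w associates with
    # attribute set A (e.g. pleasant words) than with set B (unpleasant).
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # WEAT effect size (Caliskan et al. 2017): the difference between
    # the mean associations of the two target sets, normalized by the
    # standard deviation of associations over all target words.
    s = [association(w, A, B, emb) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s, ddof=1)

# Hypothetical usage with invented word lists:
# X = ["amy", "sarah", "lisa"]        # female names
# Y = ["john", "paul", "mike"]        # male names
# A = ["joy", "peace", "wonderful"]   # pleasant attributes
# B = ["agony", "awful", "war"]       # unpleasant attributes
# print(weat_effect_size(X, Y, A, B, emb))
```

A large positive effect size would mean the female names sit closer to the pleasant words than the male names do (and vice versa for a negative one), which is the kind of association the Google post visualizes across models.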

COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy

Michael Brown has written a nice article in the U of Alberta folio, COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy. The article quotes me extensively on the ethics of these new bluetooth contact tracing tools. In the interview I tried to emphasize the importance of knowledge and consent:

  • Users of these apps should know that they are being traced through them, and
  • Users should consent to their use.

There are a variety of these apps, from the system pioneered by Singapore called TraceTogether to its Alberta cousin ABTraceTogether. There are also a variety of approaches to tracing people, from using credit card records to apps like TraceTogether. The EFF has a good essay on Protecting Civil Rights During a Public Health Crisis that I adapt here to provide guidelines for when one might gather data without knowledge or consent:

  • Medically necessary: There should be a clear and compelling explanation as to how this will save lives.
  • Personal information proportionate to need: The information gathered should fit the need and go no further.
  • Information handled by health informatics specialists: The gathering and processing should be handled by health informatics units, not signals intelligence or security services.
  • Deleted: It should be deleted once it is no longer needed.
  • Not organized by vulnerable demographics: The information should not be binned according to stereotypical or vulnerable demographics unless there is a compelling need. We should be very careful that we don’t use the data to further disadvantage groups.
  • Use reviewed afterwards: There should be a review after the crisis is over.
  • Transparency: Governments should be transparent about what they are gathering and why.
  • Due process: There should be open processes for people to challenge the gathering of their information or to challenge decisions taken as a result of such information.

The Last One

Whatever happened to The Last One software? The Last One (TLO) was a “program generator” that was supposed to take input from a user who wasn’t a programmer and be able to generate a BASIC program.
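TLO’s actual dialogue-driven generator is long gone, so any reconstruction is speculative, but as a toy sketch of what a “program generator” means, here are a few lines of Python that emit a runnable BASIC program from a non-programmer’s description (the function and field names are invented for illustration):

```python
# Toy sketch of a "program generator" in the spirit of TLO: it takes a
# description of what the user wants (a title and some fields to collect)
# and emits BASIC source code. TLO's real question-and-answer process and
# code generation were far more elaborate than this.

def generate_basic(title, fields):
    lines = [f'10 PRINT "{title.upper()}"']
    number = 20
    for i, field in enumerate(fields, start=1):
        var = f"F{i}$"  # BASIC string variables end in $
        lines.append(f'{number} INPUT "{field.upper()}"; {var}')
        lines.append(f'{number + 10} PRINT "{field.upper()} IS "; {var}')
        number += 20
    lines.append(f'{number} END')
    return "\n".join(lines)

# The generated program prompts for each field and echoes it back:
print(generate_basic("Customer record", ["name", "address"]))
```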

TLO was developed by a company called D.J. “AI” Systems Ltd. that was set up by David James, who became interested in artificial intelligence when he bought a computer for his business and apparently got so distracted by that interest that he was bankrupted (and lost his computers). It was funded by an equally colourful character, Scotty Bambury, who made his money as a tire dealer in Somerset. (See here and here.)

Personal Computer magazine cover from here

The name (The Last One) refers to the expectation that this would be the last software you would need to buy. As the cover image above shows, they were imagining programmers being put out of work by an AI that could reprogram itself. TLO would be the last software you had to buy and possibly the first AI capable of recursively improving itself. DJ AI could have been spinning up the seed AI that could lead to the singularity! 

Here is some of the text from an ad for TLO. The text ran under the spacey headline at the top of this post.

The first program you should buy. …

THE LAST ONE … The program that writes programs!

Now, for the first time, your computer is truly ‘personal’. Now, simply and easily, you can create software the way you want it. …

Yet another sense of “personal” in “personal computer” – a computer where all your software (except, of course, TLO) is personally developed. Imagine a computer that you trained to do what you needed. This was the situation with early mainframes – programmers had to develop the applications individually for each system; they just didn’t have TLO.

The tech ‘solutions’ for coronavirus take the surveillance state to the next level

Neoliberalism shrinks public budgets; solutionism shrinks public imagination.

Evgeny Morozov has a crisp essay in The Guardian on how The tech ‘solutions’ for coronavirus take the surveillance state to the next level. He argues that neoliberal austerity cut our public services back in ways that we now see are endangering lives, but it is solutionism that is constraining our ideas about what we can do to deal with such situations. If we look for a technical solution we give up on questioning the underlying defunding of the commons.

There is a nice interview between Natasha Dow Schüll and Morozov, The Folly of Technological Solutionism: An Interview with Evgeny Morozov, in which they talk about gamification and his book To Save Everything, Click Here: The Folly of Technological Solutionism.

Back in The Guardian, he ends his essay warning that we shouldn’t focus on picking between apps – between solutions. We should get beyond solutions like apps to thinking politically.

The feast of solutionism unleashed by Covid-19 reveals the extreme dependence of the actually existing democracies on the undemocratic exercise of private power by technology platforms. Our first order of business should be to chart a post-solutionist path – one that gives the public sovereignty over digital platforms.

Robots Welcome to Take Over, as Pandemic Accelerates Automation – The New York Times

But labor and robotics experts say social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation. And long-simmering worries about job losses or a broad unease about having machines control vital aspects of daily life could dissipate as society sees the benefits of restructuring workplaces in ways that minimize close human contact.

The New York Times has a story, Robots Welcome to Take Over, as Pandemic Accelerates Automation. While AI may not be that useful in making crisis decisions, robots (and the AIs that drive them) can take over certain jobs that need doing but are dangerous to humans in a time of pandemic. Sorting trash is one example given. Cleaning spaces is another.

We can imagine a dystopia where everything runs just fine with social (physical) distancing. Ultimately humans would only do the creative intellectual work, as imagined in Forster’s The Machine Stops (from 1909!). We would entertain each other with solitary interventions, or at least works that can be made with the artists far apart. Perhaps green-screen technology and animation will even let us act alone and be composited together into virtual crowds.

How useful is AI in a pandemic?

DER SPIEGEL: What are the lessons to be learned from this crisis?

Dräger: It shows that common sense is more important than we all thought. This situation is so new and complicated that the problems can only be solved by people who carefully weigh their decisions. Artificial intelligence, which everyone has been talking so much about recently, isn’t much help at the moment.

Absolutely Mission Impossible: Interview with German Ventilator Manufacturer, Spiegel International, interviewed by Lukas Eberle and Martin U. Müller, March 27th, 2020.

There are so many lessons to be learned from the coronavirus, but one is that artificial intelligence isn’t always the solution. In a health crisis that has to do with viruses in the air, not information, AI is only indirectly useful. As the head of production at the German ventilator manufacturer Drägerwerk puts it, the challenge of choosing who to sell ventilators to at a time like this is not one to be handed over to an AI. What is needed in a crisis is humans carefully weighing decisions (and taking responsibility for them).

The Machine Stops

Imagine, if you can, a small room, hexagonal in shape, like the cell of a bee. It is lighted neither by window nor by lamp, yet it is filled with a soft radiance. There are no apertures for ventilation, yet the air is fresh. There are no musical instruments, and yet, at the moment that my meditation opens, this room is throbbing with melodious sounds. An armchair is in the centre, by its side a reading-desk — that is all the furniture. And in the armchair there sits a swaddled lump of flesh — a woman, about five feet high, with a face as white as a fungus. It is to her that the little room belongs.

Like many, I reread E.M. Forster’s The Machine Stops this week while in isolation. This short story was published in 1909 and written as a reaction to The Time Machine by H.G. Wells. (See the full text here (PDF).) In Forster it is the machine that keeps the utopia of isolated pods working; in Wells it is a caste of workers, the Morlocks, who also turn out to eat the leisure class. Forster felt that technology, not class, was likely to be the problem, or at least part of it.

In this pandemic we see a bit of both. Following Wells we see a class of gig-economy deliverers who facilitate the isolated life of those of us who do intellectual work. Intellectual work has gone virtual, but we still need a physical layer maintained. (Even the language of a stack of layers comes metaphorically from computing.) But we also see in our virtualized work a dependence on an information machine that lets our bodies sit on the couch in isolation while we listen to throbbing melodies. My body certainly feels like it is settling into a swaddled lump of fungus.

An intriguing aspect of “The Machine Stops” is how Vashti, the mother who loves the life of the machine, measures everything in terms of ideas. She complains that flying to see her son and seeing the earth below gives her no ideas. Ideas don’t come from original experiences but from layers of interpretation. Ideas are the currency of an intellectual life of leisure which loses touch with the “real world.”

At the end, as the machine stops and Kuno, Vashti’s son, comes to his mother in the disaster, they reflect on how a few homeless refugees living on the surface might survive and learn not to trust the machine.

“I have seen them, spoken to them, loved them. They are hiding in the mist and the ferns until our civilization stops. To-day they are the Homeless — to-morrow—”

“Oh, to-morrow — some fool will start the Machine again, to-morrow.”

“Never,” said Kuno, “never. Humanity has learnt its lesson.”

Adventures in Science Fiction Cover Art: Disembodied Brains, Part I | Science Fiction and Other Suspect Ruminations

Gerard Quinn’s cover for the December 1956 issue of New Worlds

Thanks to Ali I came across this compilation, Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a number of pulp sci-fi covers showing giant brains. The giant brain was often the way computing was imagined; in fact, early computers were popularly called giant brains (see Edmund Berkeley’s 1949 book Giant Brains, or Machines That Think).

Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.

I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:

  • A brain is the seat of intelligence
  • The bigger the brain, the more intelligent
  • In big brains we might see emergent properties (like telepathy)
  • Scaling up the brain will give us an effective artificial intelligence

This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains seems a bit literal, but the imagined futures are nonetheless important.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too – VICE

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California privacy law to ask Clearview AI what information they kept on her and then to have it deleted. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers like Insta Stalkers and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – is it then the use of the images that is the problem? Or is it the recognition and search capability?