Zampolli Prize Awarded to Voyant Tools

Spyral Notebook Detail (showing code cell and stacked graphs)

Yesterday I gave the triennial Zampolli Prize lecture honouring Voyant. The lecture is given at the annual ADHO Digital Humanities conference, which this year is being hosted by the University of Tokyo. The award notice is here: Zampolli Prize Awarded to Voyant Tools. Some of the things I touched on in the talk included:

  • The genius of Stéfan Sinclair, who passed away in August 2020. Voyant was his vision from the time of his dissertation, for which he developed HyperPo.
  • The global team of people involved in Voyant, including many graduate research assistants at the U of Alberta. See the About page of Voyant.
  • How Voyant built on ideas Stéfan and I developed in Hermeneutica about collaborative research as opposed to the inherited solitary paradigm.
  • How we have now developed an extension to Voyant called Spyral. Spyral is a notebook programming environment built on JavaScript. It allows you to document your Voyant explorations, save parameters for corpora and tools, preprocess texts, postprocess results, and create new visualizations. It is, in short, a full data analysis and visualization environment built into Voyant so you can easily call up and explore results in Voyant’s already rich tool set.
  • In the image above you can see a Spyral code cell that outputs two stacked graphs where the same pattern of words is graphed over two different, but synchronized, corpora. You can thus compare the use of the pattern over time between the two datasets. (A rough sketch of such a cell appears after this list.)
  • Replication as a practice for recovering an understanding of innovative technologies now taken for granted, like tokenization or the KWIC (keyword in context). I talked about how Stéfan and I have been replicating important text processing technologies as a way of understanding the history of computing and the digital humanities. Spyral was the environment we developed for documenting our replications.
  • I then backed up and talked about the epistemological questions about knowledge and knowledge things in the digital age that grew out of and then inspired our experiments in replication. These go back to attempts to think-through tools as knowledge things that bear knowledge in ways that discourse doesn’t. In this context I talked about the DIKW pyramid (data, information, knowledge, wisdom) that captures current views about the relationships between data and knowledge.
  • Finally I called for help to maintain and extend Voyant/Spyral. I announced the creation of a consortium to bring us together to sustain Voyant.
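
To give a concrete sense of what such a Spyral cell involves, here is a minimal sketch of a cell that charts the same query over two corpora. The function names (loadCorpus, tool) follow my recollection of the Spyral/Voyant documentation and should be treated as assumptions rather than exact API; the corpus identifiers and query are placeholders.

```javascript
// Rough sketch of a Spyral notebook cell (the loadCorpus() and .tool() calls
// follow my recollection of the Spyral documentation and may differ in detail).
// The idea: graph the same pattern of words over two corpora so their trends
// can be compared, as in the stacked graphs shown in the image above.

var query = ["war*", "peace*"];                  // hypothetical pattern of words to track

loadCorpus("first-corpus-id").tool("Trends", {   // "first-corpus-id" is a placeholder
    query: query,
    bins: 20                                     // segment the corpus into 20 bins
});

loadCorpus("second-corpus-id").tool("Trends", {  // the second, synchronized corpus
    query: query,
    bins: 20
});
```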

It was an honour to be able to give the Zampolli lecture on behalf of all the people who have made Voyant such a useful tool.

GPT-3 on Searle’s Chinese room argument

AI: I am an AI created by OpenAI. How can I help you today?
Human: What do you think about the use of the Chinese room argument to defend the claim that a chatbot can never really understand what it is saying?
AI: The Chinese room argument is a thought experiment that was first proposed by John Searle.

Blake Myers has posted a number of conversations they have had with OpenAI’s GPT-3, including one titled GPT-3 on Searle’s Chinese room argument. What is intriguing is that Myers has had discussions with GPT-3 about specific philosophical issues around AI, including the Chinese room argument, and it appears to have answered coherently. The transcripts of these short dialogues are made available and in some cases are not edited.

I can’t help imagining how this could be used by a smart student to write a paper dialogically. One could ask questions, edit the responses, concatenate them, and write some bridging text to get a decent paper. Of course, it might be less work to just write the paper yourself.
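
For those curious about the mechanics, dialogues like Myers’s would have been generated through OpenAI’s completions API at the time. Below is a minimal sketch of how one might script a single turn of such an exchange; the model name, parameters, and prompt are my own assumptions about the 2022-era API, not Myers’s actual setup, and you would need your own API key.

```javascript
// Minimal sketch of one turn of a GPT-3 dialogue via OpenAI's completions
// endpoint (model name and parameters are assumptions about the 2022-era API).
const prompt = `The following is a conversation with an AI assistant.
Human: What do you think about the use of the Chinese room argument to defend
the claim that a chatbot can never really understand what it is saying?
AI:`;

const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`  // your own key
  },
  body: JSON.stringify({
    model: "text-davinci-002",  // assumed GPT-3 model of the period
    prompt: prompt,
    max_tokens: 250,
    temperature: 0.7
  })
});

const data = await response.json();
console.log(data.choices[0].text);  // the AI's reply, ready to edit and concatenate
```

One could loop over a list of such questions, edit the replies, and stitch them together with bridging prose, which is exactly the dialogical paper-writing imagined above.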

Axon Pauses Plans for Taser Drone as Ethics Board Members Resign – The New York Times

After Axon announced plans for a Taser-equipped drone that it said could prevent mass shootings, nine members of the company’s ethics board stepped down.

Ethics boards can make a difference, as a story from The New York Times shows: Axon Pauses Plans for Taser Drone as Ethics Board Members Resign. The problem is that board members had to resign.

The background is that Axon, after the school shootings, announced an early-stage concept for a TASER drone. The idea was to combine two emerging technologies, drones and non-lethal energy weapons. The proposal said they wanted a discussion and laws. “We cannot introduce anything like non-lethal drones into schools without rigorous debate and laws that govern their use.” The proposal went on to discuss CEO Rick Smith’s 3 Laws of Non-Lethal Robotics: A New Approach to Reduce Shootings. The 2021 video of Smith talking about his 3 laws spells out a scenario where a remote (police?) operator could guide a prepositioned drone in a school to incapacitate a threat. The 3 laws are:

  1. Non-lethal drones should be used to save lives, not take them.
  2. Humans must own use-of-force decisions and take moral and legal responsibility.
  3. Agencies must provide rigorous oversight and transparency to ensure acceptable use.

The ethics board, which had reviewed a limited internal proposal and rejected it, then resigned when Axon went ahead with the proposal anyway, issuing a statement on Twitter on June 2nd, 2022.

Rick Smith, CEO of Axon, soon issued a statement pausing work on the idea. He described the early announcement as intended to start a conversation,

Our announcement was intended to initiate a conversation on this as a potential solution, and it did lead to considerable public discussion that has provided us with a deeper appreciation of the complex and important considerations relating to this matter. I acknowledge that our passion for finding new solutions to stop mass shootings led us to move quickly to share our ideas.

This resignation illustrates a number of points. First, we see Axon struggling with ethics in the face of opportunity. Second, we see an example of an ethics board working, even if it led to resignations. These deliberations are usually hidden. Third, we see differences on the issue of autonomous weapons. Axon wants to get social license for a close alternative to AI-driven drones. They are trying to find an acceptable window for their business. Finally, it is interesting how Smith echoes Asimov’s 3 Laws of Robotics as he tries to reassure us that good system design would mitigate the dangers of experimenting with weaponized drones in our schools.

Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of its teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary article is a good discussion of Australia’s 2016 robodebt scandal, where an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service and having dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

What I find even more disturbing is a point they make about how the system shifted the responsibility for establishing the existence of the debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills. It was up to the individual to figure out whether they had really been overpaid or whether there had been a miscalculation. Imagine if the police used predictive algorithms to fine people for possible speeding infractions, leaving them to prove their innocence or pay the fine.

One can see the attractiveness of such a “fine first then ask” approach. It reduces government costs by shifting the onerous task of establishing the facts to the citizen. There is a good chance that many of those incorrectly billed will pay anyway because they are intimidated or don’t have the resources to contest the fine.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
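
To give a sense of how simple the reported logic was: the system averaged a person’s annual income from the tax office evenly across fortnights and treated any gap between that average and the income actually reported each fortnight as evidence of overpayment. The sketch below is my own illustration of that averaging idea, not the actual Centrelink code; the function names and figures are made up.

```javascript
// Illustration of the income-averaging logic reported in the Robodebt case.
// This is NOT the actual Centrelink code, just a made-up sketch of how averaging
// annual income over fortnights can manufacture "debts" for people whose income
// was uneven but correctly reported.

const FORTNIGHTS_PER_YEAR = 26;

function flagApparentOverpayments(annualIncomeFromTaxOffice, reportedFortnightlyIncomes) {
  const averaged = annualIncomeFromTaxOffice / FORTNIGHTS_PER_YEAR;
  // Any fortnight where reported income falls below the average looks like
  // under-reporting, even if every individual report was accurate.
  return reportedFortnightlyIncomes
    .map((reported, i) => ({ fortnight: i + 1, apparentGap: averaged - reported }))
    .filter(entry => entry.apparentGap > 0);
}

// Example: someone who earned $26,000 over six months of full-time work and then
// correctly reported $0 income while on benefits for the rest of the year.
const reports = Array(13).fill(2000).concat(Array(13).fill(0));
console.log(flagApparentOverpayments(26000, reports));
// Every fortnight of legitimately reported $0 income shows a $1,000 "gap".
```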

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue he and a collaborator had with LaMDA that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA really is conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (And we could even doubt what we think we are feeling.) One answer is that we have a theory of mind such that we believe that things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what you have to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it can handle language so well, and that comes to count as enough? Is the very conviction of Lemoine and others sufficient, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if you have doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you need to have a theory of why their claim to consciousness is false.

The Internet is Made of Demons

The Internet Is Not What You Think It Is is not what you think it is.

Sam Kriss has written a longish review essay on Justin E.H. Smith’s The Internet is Not What You Think It Is with the title The Internet is Made of Demons. In the first part Kriss writes about how the internet is possessing us and training us,

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech—not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels.

Then Kriss makes the case that the Internet is made of demons – not in a paranoid conspiracy sort of way, but in a historical sense that ideas like the internet often involve demons,

Trithemius invented the internet in a flight of mystical fancy to cover up what he was really doing, which was inventing the internet. Demons disguise themselves as technology, technology disguises itself as demons; both end up being one and the same thing.

In the last section Kriss turns to Justin E.H. Smith’s book and reflects on how the book (unlike the preceding essay “It’s All Over”) is not what the internet expects. The internet, for Smith, likes critical essays that present the internet as a “rupture” – something like the industrial revolution, but for language – while in fact the internet in some form (like demons) has been with us all along. Kriss doesn’t agree. For him the idea of the internet might be old, but what we have now is still a transformation of an old nightmare.

If there are intimations of the internet running throughout history, it might be because it’s a nightmare that has haunted all societies. People have always been aware of the internet: once, it was the loneliness lurking around the edge of the camp, the terrible possibility of a system of signs that doesn’t link people together, but wrenches them apart instead. In the end, what I can’t get away from are the demons. Whenever people imagined the internet, demons were always there.

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated-Decision Making Systems.” She talked about a case she was involved in concerning DNA-matching software used in criminal cases, where they were able to actually get the code and show that the software would, under certain circumstances, generate false positives (matching a person’s DNA to DNA from a crime scene when it should not have).

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged or regulated to invest some of the profit they make from the efficiencies of AI in improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI, together with a mechanism of redress. She pointed out how humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to get to sentience. The idea of the game is that you are an AI running a paperclip company and you make decisions and investments. The game was inspired by the philosopher Nick Bostrom’s paperclip maximizer thought experiment, which shows how a seemingly harmless AI that controls the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a risk to us. It might even convert all the resources of the universe into paperclips. The original thought experiment appears in Bostrom’s paper Ethical Issues in Advanced Artificial Intelligence, where it illustrates the point that “Artificial intellects need not have humanlike motives.”

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.

I learned about the game from an interesting blog entry by David Rosenthal, It Isn’t About The Technology, which responds to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and how they might save us; Rosenthal’s point is that it isn’t about the technology.

One of the more interesting ideas that Rosenthal mentions is from Charles Stross’s keynote for the 34th Chaos Communication Congress, to the effect that businesses are “slow AIs”. Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous, something we are seeing with Google and Facebook.

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa’s use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a research retreat (Dagstuhl) on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored (by AI)

Black and White and AI Coloured versions of Philosophy
Philosophy by Klimt

Google Arts & Culture launched a hub for all things Gustav Klimt today, which includes digital restorations of three lost paintings.

ARTnews, among other places, reports that Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored. The three faculties (Medicine, Philosophy, and Jurisprudence) painted for the University of Vienna were destroyed in a fire, leaving only black and white photographs. Now Google has helped recreate what the three paintings might have looked like using AI, as part of a Google Arts and Culture site on Klimt. You can read about the history of the three faculties here.

Whether in black and white or in colour, the painting of Philosophy (above) is stunning. The original, in colour and at 170 by 118 inches, would have been all the more so. Philosophy is represented by the Sphinx-like figure merging with the universe. To one side is a stream of people, from the young to the old, who hold their heads in confusion. At the bottom is a woman, comparable to the woman in the painting of Medicine, who might be an inspired philosopher looking through us.