Zampolli Prize Awarded to Voyant Tools

Spyral Notebook Detail (showing code cell and stacked graphs)

Yesterday I gave the triennial Zampolli Prize lecture honouring Voyant. The lecture is given at the annual ADHO Digital Humanities conference, which this year is being hosted by the University of Tokyo. The award notice is here: Zampolli Prize Awarded to Voyant Tools. Some of the things I touched on in the talk included:

  • The genius of Stéfan Sinclair, who passed away in August 2020. Voyant was his vision from the time of his dissertation, for which he developed HyperPo.
  • The global team of people involved in Voyant including many graduate research assistants at the U of Alberta. See the About page of Voyant.
  • How Voyant built on ideas Stéfan and I developed in Hermeneutica about collaborative research as opposed to the inherited solitary paradigm.
  • How we have now developed an extension to Voyant called Spyral. Spyral is a notebook programming environment built on JavaScript. It allows you to document your Voyant explorations, save parameters for corpora and tools, preprocess texts, postprocess results, and create new visualizations. It is, in short, a full data analysis and visualization environment built into Voyant so you can easily call up and explore results in Voyant’s already rich tool set.
  • In the image above you can see a Spyral code cell that outputs two stacked graphs where the same pattern of words is graphed over two different, but synchronized, corpora. You can thus compare the use of the pattern over time between the two datasets.
  • Replication as a practice for recovering an understanding of innovative technologies now taken for granted, like tokenization or the KWIC (keyword in context). I talked about how Stéfan and I have been replicating important text processing technologies as a way of understanding the history of computing and the digital humanities. Spyral was the environment we developed for documenting our replications.
  • I then backed up and talked about the epistemological questions about knowledge and knowledge things in the digital age that grew out of, and then inspired, our experiments in replication. These go back to attempts to think through tools as knowledge things that bear knowledge in ways that discourse doesn’t. In this context I talked about the DIKW pyramid (data, information, knowledge, wisdom) that captures current views about the relationships between data and knowledge.
  • Finally I called for help to maintain and extend Voyant/Spyral. I announced the creation of a consortium to bring us together to sustain Voyant.
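To give the flavour of the replication practice I mentioned, here is a minimal KWIC (keyword in context) display in Python. This is a toy sketch of the classic technique, not the Voyant/Spyral implementation:

```python
import re

def kwic(text, keyword, width=3):
    """List each occurrence of keyword with `width` words of context."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            hits.append(f"{left} [{tok}] {right}".strip())
    return hits
```

Running this over a text lists each occurrence of the keyword centred in its context, which is all a concordance display really is; replicating it makes you notice the decisions (tokenization, case, window size) hidden in the tool.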

It was an honour to be able to give the Zampolli lecture on behalf of all the people who have made Voyant such a useful tool.

Axon Pauses Plans for Taser Drone as Ethics Board Members Resign – The New York Times

After Axon announced plans for a Taser-equipped drone that it said could prevent mass shootings, nine members of the company’s ethics board stepped down.

Ethics boards can make a difference, as a story from The New York Times shows: Axon Pauses Plans for Taser Drone as Ethics Board Members Resign. The problem is that board members had to resign.

The background is that Axon, after the school shootings, announced an early-stage concept for a TASER drone. The idea was to combine two emerging technologies, drones and non-lethal energy weapons. The proposal said they wanted a discussion and laws. “We cannot introduce anything like non-lethal drones into schools without rigorous debate and laws that govern their use.” The proposal went on to discuss CEO Rick Smith’s 3 Laws of Non-Lethal Robotics: A New Approach to Reduce Shootings. The 2021 video of Smith talking about his 3 laws spells out a scenario where a remote (police?) operator could guide a prepositioned drone in a school to incapacitate a threat. The 3 laws are:

  1. Non-lethal drones should be used to save lives, not take them.
  2. Humans must own use-of-force decisions and take moral and legal responsibility.
  3. Agencies must provide rigorous oversight and transparency to ensure acceptable use.

The ethics board, which had reviewed a limited internal proposal and rejected it, resigned when Axon went ahead with the public proposal anyway; the board issued a statement on Twitter on June 2nd, 2022.

Rick Smith, CEO of Axon, soon issued a statement pausing work on the idea. He described the early announcement as intended to start a conversation:

Our announcement was intended to initiate a conversation on this as a potential solution, and it did lead to considerable public discussion that has provided us with a deeper appreciation of the complex and important considerations relating to this matter. I acknowledge that our passion for finding new solutions to stop mass shootings led us to move quickly to share our ideas.

This resignation illustrates a number of points. First, we see Axon struggling with ethics in the face of opportunity. Second, we see an example of an ethics board working, even if it led to resignations. These deliberations are usually hidden. Third, we see differences on the issue of autonomous weapons. Axon wants to get social license for a close alternative to AI-driven drones. They are trying to find an acceptable window for their business. Finally, it is interesting how Smith echoes Asimov’s 3 Laws of Robotics as he tries to reassure us that good system design would mitigate the dangers of experimenting with weaponized drones in our schools.

Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of its teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open-access paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary is a good discussion of Australia’s 2016 robodebt scandal, in which an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service with dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

What I find even more disturbing is a point they make about how the system shifted the responsibility for establishing the existence of a debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills, leaving it to the individual to figure out whether they had really been overpaid or whether there had been a miscalculation. Imagine if the police used predictive algorithms to fine people for probable speeding infractions and left it to them to prove their innocence or pay the fine.

One can see the attractiveness of such a “fine first, then ask” approach. It reduces government costs by shifting the onerous task of establishing the facts to the citizen. There is a good chance that many who were incorrectly billed paid anyway because they were intimidated or didn’t have the resources to contest the fine.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
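From what has been reported, the core calculation was income averaging: the tax office’s annual income figure spread evenly over the year’s 26 fortnights and compared to what the recipient reported each fortnight. A toy reconstruction (my illustration, not the actual system’s code) shows how averaging fabricates a debt for anyone whose income was uneven:

```python
FORTNIGHTS = 26  # fortnightly reporting periods in a year

# A student earns $26,000, all of it in 13 fortnights of full-time work,
# and correctly reports $0 income while on benefits the other 13.
actual = [2000] * 13 + [0] * 13
averaged = [26000 / FORTNIGHTS] * FORTNIGHTS  # what the system assumed

# In each benefit fortnight the system "finds" $1,000 of unreported income,
# even though every report was accurate.
phantom_income = sum(a - r for a, r in zip(averaged, actual) if a > r)
```

Here phantom_income comes to $13,000 of alleged debt against someone who reported correctly; averaging only gives the right answer when income is perfectly even across the year.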

Predatory community

Projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud are demonstrating predatory community.

Through a Washington Post article I discovered Molly White, who has been documenting first the alt-right and now the crypto community. She has a blog at Molly White and a site that documents the problems of crypto at Web3 is going just great. There is, of course, a connection between the alt-right and crypto broculture, something she talks about in posts like Predatory community, which is about how crypto promotions try to build community and are now playing the inclusion card: aiming at marginalized communities and trying to convince them that now they too can get in on the action and build community. She calls this “predatory community.”

Groups that operate under the guise of inclusion, regardless of their intentions, are serving the greater goal of crypto that keeps the whole thing afloat: finding ever more fools to buy in so that the early investors can take their profits. And it is those latecomers who are left holding the bag in the end.

With projects that seek to provide services and opportunities to members of marginalized groups who have previously not had access, but on bad terms that ultimately disadvantaged them, we see predatory inclusion. With projects that seek to create new communities of marginalized people to draw them in to risky speculative markets rife with scams and fraud, we are now seeing predatory community.

Michael Groden Obituary

I just found out that Michael Groden (1947–2021) passed away a year ago. Groden was a member of CSDH/SCHN back when it was called COCH/COSH and gave papers at our conferences. He developed a hypertext version of Ulysses that was never published because of rights issues. He did, however, talk about it, and he published on his ideas about hypertext editions of complex works like Ulysses. See his online CV for more.

Wordle – A daily word game

Wordle Logo

Guess the hidden word in 6 tries. A new puzzle is available each day.

Well … I finally played Wordle – A daily word game after reading about it. It was a nice clean puzzle that got me thinking about vowels. I like that there is only one a day: I was immediately tempted to try another and another, and instead the one-a-day limit gives it a detachment. I can see why the New York Times would buy it; it is the sort of game that could bring in potential subscribers.
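The mechanic behind the puzzle – each guessed letter marked as in the right spot, in the word but elsewhere, or absent, with duplicate letters matched only while copies remain – can be sketched in Python (my reading of the rules, not the game’s actual code):

```python
from collections import Counter

def score(guess, answer):
    """Wordle-style feedback: G = right spot, Y = wrong spot, . = absent."""
    result = ["."] * len(guess)
    remaining = Counter()
    # First pass: claim exact matches; count the unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "G"
        else:
            remaining[a] += 1
    # Second pass: mark misplaced letters while unmatched copies remain.
    for i, g in enumerate(guess):
        if result[i] == "." and remaining[g] > 0:
            result[i] = "Y"
            remaining[g] -= 1
    return "".join(result)
```

The two-pass structure is what makes duplicates behave sensibly: greens are claimed first, so a second copy of a letter only earns a yellow if the answer actually contains another copy.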

We Might Be in a Simulation. How Much Should That Worry Us?

We may not be able to prove that we are in a simulation, but at the very least, it will be a possibility that we can’t rule out. But it could be more than that. Chalmers argues that if we’re in a simulation, there’d be no reason to think it’s the only simulation; in the same way that lots of different computers today are running Microsoft Excel, lots of different machines might be running an instance of the simulation. If that was the case, simulated worlds would vastly outnumber non-sim worlds — meaning that, just as a matter of statistics, it would be not just possible that our world is one of the many simulations but likely.

The New York Times has a fun opinion piece to the effect that We Might Be in a Simulation. How Much Should That Worry Us? This follows on Nick Bostrom’s essay Are You Living in a Computer Simulation?, which argues that either advanced posthuman civilizations almost never run lots of simulations of their past or we are almost certainly living in one.

The opinion piece is partly a review of a recent book by David Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (which I haven’t read). Chalmers thinks there is a good chance we are in a simulation and, if so, there are probably others.
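The statistical step in the argument is just counting: if each non-simulated world runs k simulations, then k of every k + 1 worlds are simulations. A back-of-the-envelope sketch:

```python
def p_simulated(k):
    """Chance a randomly chosen world is a simulation, assuming each
    base (non-simulated) world runs k simulations of its past."""
    return k / (k + 1)
```

With k = 9 the chance is already 90%, which is why the argument can move from “simulations exist in numbers” to “we are probably in one” so quickly.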

I am also reminded of Hervé Le Tellier’s novel The Anomaly, in which a plane full of people pops out of the clouds for a second time, creating an anomaly: two instances of each person on the plane. This glitch is taken as a possible indication that we are in a simulation, raising all sorts of questions about whether there actually are anomalies that might show this really is a simulation, or a complicated idea in God’s mind (think of Bishop Berkeley’s idealism).

For me the challenge is the complexity of the world I experience. I can’t help thinking that a posthuman society modelling things really wouldn’t need a world as rich as the one I experience. For that matter, would there really be enough computing power to do it? Is this simulation fantasy just a virtual-reality version of the singularity hypothesis, prompted by the new VR technologies coming on stream?

The Future of Digital Assistants Is Queer

AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.

Wired has a nice article on how The Future of Digital Assistants Is Queer. The article looks at the gendering of virtual assistants like Siri and argues that it is not enough to just offer male voices; we need to queer the voices. It also mentions the ethical issue of what a voice conveys, like whether the VA is a bot or not.

Why people believe Covid conspiracy theories: could folklore hold the answer?

Using Danish witchcraft folklore as a model, the researchers from UCLA and Berkeley analysed thousands of social media posts with an artificial intelligence tool and extracted the key people, things and relationships.

The Guardian has a nice story on Why people believe Covid conspiracy theories: could folklore hold the answer? This reports on research using folklore theory and artificial intelligence to understand conspiracies.

The story maps how, for conspiracy fans, Bill Gates connects the coronavirus with 5G. The researchers use folklore theory to understand the way conspiracies work.

Folklore isn’t just a model for the AI. Tangherlini, whose specialism is Danish folklore, is interested in how conspiratorial witchcraft folklore took hold in the 16th and 17th centuries and what lessons it has for today.

Whereas in the past, witches were accused of using herbs to create potions that caused miscarriages, today we see stories that Gates is using coronavirus vaccinations to sterilise people. …

The research also hints at a way of breaking through conspiracy theory logic, offering a glimmer of hope as increasing numbers of people get drawn in.

The story then addresses the question of what difference the research might make. What good would a folklore map of a conspiracy theory do? The challenge for research like this is that more information clearly doesn’t work in a world of information overload.

The paper the story is based on is Conspiracy in the time of corona: automatic detection of emerging Covid-19 conspiracy theories in social media and the news, by Shadi Shahsavari, Pavan Holur, Tianyi Wang, Timothy R. Tangherlini and Vwani Roychowdhury.