DH 2024: Visualization Ethics and Text Analysis Infrastructure

This week I’m at DH 2024, hosted by George Mason University in the Washington, DC area. I presented as part of two sessions.

On Wednesday I presented a short paper with Lauren Klein on work a group of us are doing on Visualization Ethics: A Case Study Approach. We met at a Dagstuhl seminar on Visualization and the Humanities: Towards a Shared Research Agenda, where we developed case studies for teaching visualization ethics; those cases were the subject of our short presentation. The link above is to a Google Drive with drafts of our cases.

Thursday morning I was part of a panel on Text Analysis Tools and Infrastructure in 2024 and Beyond. (The link, again, takes you to a web page where you can download the short papers we wrote for this “flipped” session.) This panel brought together a number of text analysis projects, like WordCruncher and Lexos, to talk about how we can maintain and evolve our infrastructure.

How to Write Poetry Using Copilot

How to Write Poetry Using Copilot is a short guide on using Microsoft Copilot to write different genres of poetry. Try it out; it is rather interesting. Here are some of the reasons they give for asking Copilot to write poetry:

  • Create a thoughtful surprise. Why not surprise a loved one with a meaningful poem that will make their day?
  • Add poems to cards. If you’re creating a birthday, anniversary, or Valentine’s Day card from scratch, Copilot can help you write a unique poem for the occasion.
  • Create eye-catching emails. If you’re trying to add humor to a company newsletter or a marketing email that your customers will read, you can have Copilot write a fun poem to spice up your emails.
  • See poetry examples. If you’re looking for examples of different types of poetry, like sonnets or haikus, you can use Copilot to give you an example of one of these poems.


Constellate

The new text and data analysis service from JSTOR and Portico.

Thanks to John I have been exploring Constellate. This comes from ITHAKA, the organization that developed JSTOR. Constellate lets you build a dataset from their collections and then visualize the data (see the image above). They also have a Jupyter lab where you can run notebooks on your data.
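If you want a feel for the notebook side of that workflow, here is a minimal Python sketch that tallies word frequencies from a downloaded dataset. It assumes a gzipped JSONL export with a precomputed “unigramCount” field per document, which is how I understand Constellate’s dataset format; the file name is hypothetical, and Constellate’s own notebooks use their own client library rather than the standard-library approach here.

```python
import gzip
import json
from collections import Counter

# Path to a dataset downloaded from Constellate (hypothetical file name).
DATASET_PATH = "my-constellate-dataset.jsonl.gz"

totals = Counter()
doc_count = 0

# Each line of the JSONL file is one document. Constellate datasets
# include precomputed features such as a "unigramCount" dictionary
# (an assumption based on their documented format).
with gzip.open(DATASET_PATH, "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        totals.update(doc.get("unigramCount", {}))
        doc_count += 1

print(f"Documents: {doc_count}")
for word, count in totals.most_common(20):
    print(f"{count:8d}  {word}")
```

From there it is a short step to the kind of visualization Constellate shows, e.g. feeding the counts to a plotting or word-cloud library.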

They are now experimenting with AI tools.

In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story on how, In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack has created a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine, but the Ukrainians are also weaponizing cheap technologies and, in the process, removing a lot of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks assembled from plans circulating on the internet?

ChatGPT is Bullshit

The Hallucination Lie

Ignacio de Gregorio has a nice Medium essay about why ChatGPT is bullshit. The essay is essentially a short and accessible version of an academic article by Hicks, Humphries, and Slater (2024), “ChatGPT is bullshit”. They make the case that people act on their understanding of what LLMs are doing, and that “hallucination” is the wrong word because ChatGPT is not misperceiving the way a human would. Instead, we need to understand that LLMs are designed with no regard for truth and are therefore bullshitting.

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. (p. 1)

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor. (p. 2)

At the end the authors argue that if we adopt Dennett’s intentional stance, we would do well to attribute to ChatGPT the intentions of a hard bullshitter, as that would allow us to better diagnose what it is doing. There is also a discussion of the intentions of the developers: you could say that they made available a tool that bullshits without care for the truth.

Are we, as a society, at risk of being led by these LLMs and their constant use to mistake a simulacrum of “truthiness” for true knowledge?


Why the pope has the ears of G7 leaders on the ethics of AI

Pope Francis is leaning on the thinking of Paolo Benanti, a friar adept at explaining how technology can change the world

The Guardian has some good analysis on Why the pope has the ears of G7 leaders on the ethics of AI. Italy’s prime minister, Giorgia Meloni, invited the pope to address the G7 leaders on the issue of AI; I blogged about this here. It is worth pointing out that this is not the first time the Vatican has intervened on the issue of AI ethics. Here is a short timeline:

  • In 2020 a group of Catholic organizations and industry heavyweights signed the Rome Call for AI Ethics. The Archbishop of Canterbury recently signed as well.
  • In 2021 they created the RenAIssance Foundation, building on the Rome Call. Its scientific director is Paolo Benanti, a charismatic Franciscan friar, professor, and writer on religion and technology. He is apparently advising both Meloni and the pope, and he coined the term “algo-ethics”. Most of his publications are in Italian, but there is an interview in English. He is also apparently on the OECD’s expert panel now.
  • In 2022 Benanti published Human in the Loop: Decisioni umane e intelligenze artificiali (Human in the Loop: Human Decisions and Artificial Intelligences), which is about the importance of ethics to AI and the place of the human in ethics.
  • In 2024 Meloni invited the pope to address the G7 leaders gathered in Italy on AI.

UN launches recommendations for urgent action to curb harm from spread of mis- and disinformation and hate speech: Global Principles for Information Integrity address risks posed by advances in AI

United Nations, New York, 24 June 2024 – The world must respond to the harm caused by the spread of online hate and lies while robustly upholding human rights, United Nations Secretary-General António Guterres said today at the launch of the United Nations Global Principles for Information Integrity.

The UN has issued a press release announcing the launch of the United Nations Global Principles for Information Integrity, with recommendations for urgent action to curb the harm from the spread of mis- and disinformation and hate speech.

The recommendations in the press release include:

Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.

Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy, and safety, allowing users greater choice and control over their online experience and personal data.

Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.

Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits and co-develop industry accountability frameworks.


Decker – A HyperCard for the Web

I’m at the CSDH-SCHN conference in Montreal. Congress is taking place at McGill, but we have relocated to the Université de Montréal. Jason Boyd gave a paper about the Centre for Digital Humanities at TMU, which he directs. He mentioned an authoring environment called Decker that recreates a deck/card-based environment similar to HyperCard.

Decker can be used to create visual novels, interactive texts, hypertexts, educational apps, and small games. It has a scripting language, Lil, related to Lua, and simple graphics tools.

Decker looks really neat and seems to work within a browser as an HTML page. This means that you can Save As a page and get the development environment locally. All the code and data live in a single page that can be forked or passed around.

As a lover of HyperCard I am thrilled to see something that replicates its spirit!

Rebind | Read Like Never Before With Experts, AI, & Original Content

Experience the next chapter in reading with Rebind: the first AI-reading platform. Embark on expert-guided journeys through timeless classics.

From a NYTimes story I learned about John Kaag’s new initiative Rebind | Read Like Never Before With Experts, AI, & Original Content. The philosophers Kaag and Clancy Martin have teamed up with an investor to start a company that creates AI-enhanced “rebindings” of classics. They work with an out-of-copyright book and then pay someone to interpret or comment on it. The commentary is then used to train an AI with which you can dialogue as you go through the book. The end result (which I am on the waitlist to try) will be a reading experience enhanced by interpretative videos and chances to interact. It answers Plato’s old critique of writing, that you cannot ask questions of a text. Now you can.
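Rebind has not published its architecture, but the description suggests a retrieval-augmented pattern: chunk and index the expert commentary, retrieve the passages relevant to a reader’s question, and use them to ground the AI’s side of the dialogue. Here is a toy Python sketch of that pattern; everything in it (the sample chunks, the naive keyword matching standing in for a real embedding index, and the prompt format) is hypothetical, not Rebind’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str  # where in the commentary this passage comes from
    text: str

# Hypothetical commentary chunks; a real system would embed these
# and store them in a vector index rather than match by keywords.
COMMENTARY = [
    Chunk("commentary, ch. 1", "The economy chapter questions what we truly need."),
    Chunk("commentary, ch. 2", "The pond functions as a mirror for self-examination."),
]

def retrieve(question: str, chunks: list[Chunk], k: int = 1) -> list[Chunk]:
    """Rank chunks by naive word overlap with the reader's question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, chunks: list[Chunk]) -> str:
    """Assemble a grounded prompt for whatever LLM backs the dialogue."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in chunks)
    return f"Answer using this commentary:\n{context}\n\nReader's question: {question}"

question = "What does the pond symbolize?"
print(build_prompt(question, retrieve(question, COMMENTARY)))
```

In a real system the retrieval step would use embeddings and the prompt would be sent to an LLM; the sketch just shows where the paid commentary would enter the loop.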

This reminds me of an experiment by Schwitzgebel, Strasser, and Crosby, who created a Daniel Dennett chatbot. Here you can see Schwitzgebel’s reflections on the project.

This project raised ethical issues, such as whether it is ethical to simulate a living person. In this case they asked for Dennett’s permission and didn’t give people direct access to the chatbot. With the announcements about Apple Intelligence, it looks like Apple may provide an AI that is part of the operating system and has access to your combined files, so as to help with search and to help you talk with yourself. Internal dialogue, of course, is the paradigmatic manifestation of consciousness. Could one import one or two thinkers to have a multi-party dialogue about one’s thinking over time … “What do you think, Plato; should I write another paper about ethics and technology?”

Surgeon General: Social Media Platforms Need a Health Warning

It’s time for decisive action to protect our young people.

The New York Times is carrying an opinion piece by Vivek H. Murthy, the Surgeon General of the USA, arguing that Social Media Platforms Need a Health Warning. He argues that we have a youth mental health crisis and that “social media has emerged as an important contributor.” For this reason he wants social media platforms to carry a warning label like those on cigarettes, something that would take congressional action.

He has more advice in a Social Media and Youth Mental Health advisory, including protecting youth from harassment and problematic content. The rhetoric is to give parents support:

There is no seatbelt for parents to click, no helmet to snap in place, no assurance that trusted experts have investigated and ensured that these platforms are safe for our kids. There are just parents and their children, trying to figure it out on their own, pitted against some of the best product engineers and most well-resourced companies in the world.

Social media has gone from a tool of democracy (remember Tahrir Square?) to an info-plague in a little over ten years. Just as it is easy to seek salvation in technology, and the platforms encourage such hype, it is also easy to blame it. The Surgeon General’s sort of advice will get broad support, but will anything happen? How long will it take regulation and civil society to box the platforms into civil business? The Surgeon General calls how well we protect our children a “moral test.” Indeed.