CSDH/SCHN Congress 2025: Reframing Togetherness

These last few days I have been at the CSDH/SCHN conference, part of Congress 2025. With colleagues and graduate research assistants I was part of a number of papers and panels. The programme is here. Some of the papers I was involved in included:

  • Exploring the Deceptive Patterns of Chinook: Visualization and Storytelling Approaches to Critical Software Study – Roya Sharifi; Ralph Padilla; Zahra Farhangfar; Yasmeen Abu-Laban; Eleyan Sawafta; and Geoffrey Rockwell
  • Building a Consortium: An Approach to Sustainability – Geoffrey Martin Rockwell; Michael Sinatra; Susan Brown; John Bradley; Ayushi Khemka; and Andrew MacDonald
  • Integrating Large Language Models with Spyral Notebooks – Sean Lis and Geoffrey Rockwell
  • AI-Driven Textual Analysis to Decode Canadian Immigration Social Media Discourse – Augustine Farinola & Geoffrey Martin Rockwell
  • The List in Text Analysis – Geoffrey Martin Rockwell; Ryan Chartier; and Andrew MacDonald

I was also part of a panel on Generative AI, LLMs, and Knowledge Structures organized by Ray Siemens. My paper was on Forging Interpretations with Generative AI. Here is the abstract:

With large language models we can now generate fairly sophisticated interpretations of documents using natural language prompts. We can ask for classifications, summaries, visualizations, or specific content to be extracted. In short, we can automate content analysis of the sort we used to count as research. As we play with the forging of interpretations at scale, we need to consider the ethics of using generative AI in our research. We need to ask how we can use these models with respect for sources, care for transparency, and attention to positionality.
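To make the kind of automation described above concrete, here is a minimal sketch of prompting a large language model to classify a document. It assumes the OpenAI Python client; the model name, category list, and classify helper are illustrative assumptions, not anything from the paper or panel.

```python
# A minimal sketch of automating content analysis with an LLM.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the
# environment; the categories and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["policy discussion", "personal narrative", "news report"]

def classify(document: str) -> str:
    """Ask the model to assign one of CATEGORIES to a document."""
    prompt = (
        "Classify the following document as one of: "
        + ", ".join(CATEGORIES)
        + ". Answer with the category name only.\n\n"
        + document
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(classify("The minister announced new immigration targets today."))
```

The same pattern extends to summaries or extraction just by changing the prompt, which is what makes content analysis at this scale so easy, and the ethical questions above so pressing.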

Moral Responsibility

On Thursday (the 29th of May) I gave a talk on Moral Responsibility and Artificial Intelligence at the Canadian AI 2025 conference in Calgary, Alberta.

I discussed what moral responsibility might be in the context of AI and argued for an ethic of care (and repair) approach to building relationships of responsibility and responsibility practices.

There was a Responsible AI track in the conference that had some great talks by Gideon Christian (U of Calgary in Law) and Julita Vassileva (U Saskatchewan).

News Media Publishers Run Coordinated Ad Campaign Urging Washington to Protect Content From Big Tech and AI

Today, hundreds of news publishers launched the “Support Responsible AI” ad campaign, which calls on Washington to make Big Tech pay for the content it takes to run its AI products.

I came across one of these ads about AI Theft from the News Media Alliance and followed it to this site: News Media Publishers Run Coordinated Ad Campaign Urging Washington to Protect Content From Big Tech and AI. They have three asks:

  • Require Big Tech and AI companies to fairly compensate content creators.
  • Mandate transparency, sourcing, and attribution in AI-generated content.
  • Prevent monopolies from engaging in coercive and anti-competitive practices.

Gary Marcus has a Substack column on Sam Altman’s attitude problem that talks about Altman’s lack of a response when confronted with an example of what seems like IP theft. I think the positions are hardening as groups begin to use charged language like “theft” for what AI companies are doing.

Responsible AI Lecture in Delhi

A couple of days ago I gave an Institute Lecture on What is Responsible About Responsible AI at the Indian Institute of Technology Delhi, India. In it I looked at how AI ethics governance is discussed in Canada under the rubric of Responsible AI and AI Safety. I talked about the emergence of AI Safety Institutes like CAISI (the Canadian AI Safety Institute). Just when it seemed that “safety” was the emergent international approach to ethics governance, Vice President JD Vance’s speech at the Paris Summit made it clear that the Trump administration is not interested,

The AI future is not going to be won by hand-wringing about safety. (Vance)

IIT Delhi DH 2025 Winter School

Arjun Ghosh invited me to contribute to the DH 2025 Winter School at IIT Delhi. I’m teaching a 6-day workshop on Voyant as part of this Winter School. You can see my outline here (note that I am still finishing the individual pages). Some thoughts:

  • There is a real interest in DH in India. Arjun had over 500 applications for 25 places. I doubt we would have that many in Canada.
  • As can be expected, there is a lot of interest in handling Indian languages like Hindi or Tamil (see the sketch after this list).
  • There are a number of social scientists at the School. The humanities and social sciences may not be as clearly distinguished here.
  • There was an interesting session on digital libraries given by a data librarian at UPenn.
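On the Indian languages point above, here is a rough sketch of the sort of word-frequency counting a Voyant-style tool needs to handle for Devanagari text. This is a minimal sketch under my own assumptions; the sample sentence and the treatment of the danda (“।”) as punctuation are illustrative, not anything from the workshop materials.

```python
# Count word frequencies in Hindi text by splitting on whitespace and
# stripping punctuation, including the Devanagari danda ("।").
from collections import Counter
import string

def word_frequencies(text: str) -> Counter:
    punctuation = string.punctuation + "।"
    tokens = (token.strip(punctuation) for token in text.split())
    return Counter(token for token in tokens if token)

# Illustrative sample: "Language is our identity. Language carries culture."
sample = "भाषा हमारी पहचान है। भाषा संस्कृति की वाहक है।"
print(word_frequencies(sample).most_common(3))
```

A whitespace split works for Hindi and Tamil, which separate words with spaces; tasks that need real segmentation, like compound or sandhi splitting, would call for language-specific tokenizers.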

Do we really know how to build AGI?

Sam Altman, in a blog post titled Reflections, looks back at what OpenAI has done and then claims that they now know how to build AGI,

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

It is worth noting that the definition of AGI (Artificial General Intelligence) is sufficiently vague that meeting this target could become a matter of semantics. Nonetheless, here are some definitions of AGI from OpenAI or others about OpenAI,

  • “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” – Note the “economically valuable work”. I wonder if philosophizing or making art is valuable? Is intelligence being limited here to economics?
  • “AI systems that are generally smarter than humans” – This is somewhat circular, as it brings us back to defining “smartness”, another word for “intelligence”.
  • “any system that can outperform humans at most tasks” – This could be tied to the quote above and the idea of AI agents that can work for companies outperforming humans. It seems to me we are nowhere near this if you include physical tasks.
  • “an AI system that can generate at least $100 billion in profits” – This is the definition used by OpenAI and Microsoft to help identify when OpenAI no longer has to share technology with Microsoft.

Claudette – An Automated Detector of Potentially Unfair Clauses in Online Terms of Service

Randy Goebel gave a great presentation on the use of AI in Judicial Decision Making on Friday to my AI Ethics course. He showed us an example tool called Claudette which can be used to identify potentially unfair clauses in a Terms and Conditions document. You can try it at the dedicated web site here.

Why is this useful? It provides a form of summary of documents none of us read, one that could help us catch problematic clauses. It could help us be more careful users of applications.
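To give a feel for what such a tool does, here is a toy sketch of flagging clauses in a terms-of-service text. Claudette itself relies on trained machine-learning classifiers; this keyword heuristic and its trigger phrases are my own simplified stand-in for the idea, not how Claudette actually works.

```python
# A toy clause flagger: split a terms-of-service text into clauses and
# flag any that contain hypothetical trigger phrases for unfairness
# categories. Real systems like Claudette use trained classifiers.
import re

TRIGGERS = {
    "unilateral change": ["we may modify", "at any time without notice"],
    "content removal": ["remove any content", "sole discretion"],
    "liability limitation": ["not liable", "provided as is"],
}

def flag_clauses(terms: str) -> list[tuple[str, str]]:
    """Return (category, clause) pairs for clauses matching a trigger."""
    clauses = re.split(r"(?<=[.;])\s+", terms)
    flagged = []
    for clause in clauses:
        lowered = clause.lower()
        for category, phrases in TRIGGERS.items():
            if any(phrase in lowered for phrase in phrases):
                flagged.append((category, clause))
    return flagged

sample = ("We may modify these terms at any time without notice. "
          "The service is provided as is and we are not liable for losses.")
for category, clause in flag_clauses(sample):
    print(f"[{category}] {clause}")
```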

Can A.I. Be Blamed for a Teen’s Suicide?

The New York Times has a story about a youth who died by suicide after extended interactions with a character on Character.ai. The story, Can A.I. Be Blamed for a Teen’s Suicide?, describes how Sewell Setzer III had long discussions with a character called Daenerys Targaryen from the Game of Thrones series. He became isolated and attached to Daenerys. He eventually shot himself, and now his mother is suing Character.ai.

Here is an example of what he wrote in his journal,

I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.

The suit claims that Character.ai’s product was untested, dangerous and defective. It remains to be seen if these types of suits will succeed. In the meantime we need to be careful with these social AIs.

The 18th Annual Hurtig Lecture 2024: Canada’s Role in Shaping our AI Future

The video for the 2024 Hurtig Lecture is up. The speaker was Dr. Elissa Strome, Executive Director of the Pan-Canadian AI Strategy. She gave an excellent overview of the AI Strategy here in Canada and ended by discussing some of the challenges.

The Hurtig Lecture was organized by my colleague Dr. Yasmeen Abu-Laban. I got to moderate the panel discussion and Q & A after the lecture.

Dario Amodei: Machines of Loving Grace

Dario Amodei of Anthropic fame has published a long essay on AI titled Machines of Loving Grace: How AI Could Transform the World for the Better. In the essay he talks about how he doesn’t like the term AGI and prefers instead to talk about “powerful AI”, and he provides a set of characteristics he considers important, including the ability to work on issues in a sustained fashion over time.

Amodei also doesn’t worry much about the Singularity, as he believes powerful AI will still have to deal with real-world problems, like building physical systems, when designing more powerful AI. I tend to agree.

The point of the essay is, however, to focus on five categories of positive applications of AI that are possible:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

The essay is long, so I won’t go into detail. What is important is that he articulates a set of positive goals that AI could help with in these categories. He calls his vision both radical and obvious. In a sense he is right – we have stopped trying to imagine a better world through technology, whether out of cynicism or attention only to details.

Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.