The Next Generation Frontiers Symposium

The Next Generation Frontiers Symposium is in full swing in Banff! From sustainability to culture, yesterday’s sessions showcased the breadth of ideas shaping the future of AI. In a panel moderated by Hsiao-Ting Tseng, researchers Anfanny Chen, Shih-Fang Chen and Hsien-Tien Lin shared how AI can drive sustainable practices  — from smarter agriculture and resource management to greener supply chains and reduced carbon emissions. Later, Annie En-Shuin Lee, Dane Malenfant, Chi-Jui Hu, and Yun-Pu Tu led a fascinating discussion, moderated by Geoffrey Rockwell, on Indigenous AI and Culture, exploring the relationship between AI, cultural diversity and Indigenous knowledge. The day highlighted how meaningful interdisciplinary exchange can spark fresh perspectives and lead to new frontiers in research. (From here)

I’ve just come back from the Next Generation Frontiers Symposium, which was organized by CIFAR, Taiwan’s National Science and Technology Council (NSTC), and the Research Institute for Democracy, Society and Emerging Technology (DSET). The symposium brought together researchers from Taiwan and Canada to talk about Responsible AI, Sovereign AI, AI and Sustainability, and Indigenous AI and Culture. I moderated the Indigenous AI and Culture theme, which looked at how AI might affect Indigenous communities in both Taiwan and Canada. Some of the reflections include:

  • Indigenous communities are often poorly represented in LLMs. We need ways for communities to personalize models with their own knowledge.
  • The mass scraping of the Internet with little regard for the ownership or consent of content creators is more of the extractive and colonizing behaviour that leads many Indigenous communities to distrust settler nations.
  • There are knowledge practices and types of knowledge like gendered knowledge, age-specific knowledge, and location-based knowledge that simply cannot be datafied and modelled if they are to maintain their character.
  • Datafication and modelling work with measurable evidence. Anything that can’t be captured, sampled, and measured can’t be datafied and thus can’t be modelled. Further, there is the danger that such evidence and knowledge will be delegitimized as unmeasurable and eventually excluded as fiction or mysticism. We could end up believing that only what we can datafy and model counts as knowledge.
  • Western epistemological practices of openness, science, and replicable results should not be imposed on communities with different epistemological practices. AI is the product of Western epistemology and thus may never be compatible with Indigenous wisdom.
  • We need to respect the desire of some communities to be forgotten and thus not scraped at all for measurable knowledge. Some may choose opacity.
  • Knowledge and its materials taken from communities should be returned. Communities should be supported to develop their own ways of preserving their knowledge including ways of datafying and modelling their knowledge, if they so wish.

Margaret Tu, one of the participants in the session, wrote a moving essay about the need for cultural safety for Indigenous communities in the face of disaster in Taiwan. See Taiwan’s Barrier Lake Disaster Intersects With Its Troubled Indigenous Policy. It ends with this wisdom,

Disasters demand speed, but recovery demands reflection. For the Fata’an, healing will not come from relocation alone; it must be rooted in both land and culture.

How To Festival: How to think like an AI Ethicist

On Saturday I gave an online talk on “How to think like an AI Ethicist” as part of a How To Festival. I talked about responsibility and the issue of “responsibility gaps”, and about some key risks like hallucinations, bias, deep fakes, and companion AIs. I also mentioned that we need to celebrate the effective uses of AI and think not just about hazards, but also about AI for good.

Artificial intelligence (AI) is everywhere. We all need to assess what to use and how to use the new tools. In this talk Geoffrey Rockwell will discuss some of the safety issues raised by the new generative AI tools. He will suggest some ways you can think through AI.

Geoffrey Rockwell is a Professor of Philosophy and Digital Humanities at the University of Alberta. He is also a Canada CIFAR AI Chair working on responsible AI.

EPL’s annual How To Festival is a chance to learn something new from someone who already knows how to do it. A variety of experts from professionals to enthusiasts will share their skills with you.

This is an online program. To receive a link and passcode to the online class, please register with your name and email address and instructions will be sent to you within 24 hours of the session. Zoom, a third-party app, will be used for this virtual session. By joining, you acknowledge that EPL does not take responsibility for Zoom’s privacy policies and practices.

Source: How To Festival: How to think like an AI Ethicist

Apertus | Swiss AI

Switzerland has developed an open set of models, Apertus | Swiss AI, that is trained on a documented training set, “developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act.”

EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS) have released Apertus, Switzerland’s first large-scale open, multilingual language model — a milestone in generative AI for transparency and diversity. Trained on 15 trillion tokens across more than 1,000 languages – 40% of the data is non-English – Apertus includes many languages that have so far been underrepresented in LLMs, such as Swiss German, Romansh, and many others. Apertus serves as a building block for developers and organizations for future applications such as chatbots, translation systems, or educational tools.
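For developers this openness is very practical: an open model like Apertus can, in principle, be pulled down and run locally with standard tooling. Here is a minimal sketch, assuming the weights are published on Hugging Face under an identifier like swiss-ai/Apertus-8B (a placeholder, not a confirmed release name) and that the Hugging Face transformers library is installed:

    # Minimal sketch of loading an open model with Hugging Face transformers.
    # The model identifier below is an assumed placeholder for an Apertus release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "swiss-ai/Apertus-8B"  # hypothetical identifier

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Wie sagt man 'good morning' auf Rumantsch?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point of the sketch is simply that nothing stands between a researcher and the model except compute: no API key, and no terms of access that can be changed later.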

This project should interest us in Canada as we are talking about Sovereign AI. Should Canada develop its own open models? What advantages would that provide? Here are some I can think of:

  • It could provide an open and well-maintained set of LLMs that researchers and companies could build on or use without fear that access could be changed or withdrawn, or that data about their usage would be logged.
  • It could be designed to be privacy protecting and to encourage adherence to relevant and changing Canadian laws and best practices.
  • It could be trained on an open and well-documented bilingual dataset that would reflect Canadian history, culture, and values.
  • It could be iteratively retrained as issues like bias are shown to be tied to parts of the training data. It could also be retrained for new capacities as needed by Canadians.
  • It could include ethically accessed Indigenous training sets developed in consultation with Indigenous communities. Further, it could be made available to Indigenous scholars and communities with support for the development of culturally appropriate AI tools.
  • We could archive code, data, weights, documentation in such a way that Canadians could check, test, and reproduce the work.

I wonder if we could partner with Switzerland, or with other countries that share similar values, to build on their model and produce a joint one.

Personal Superintelligence

Explore Meta’s vision of personal superintelligence, where AI empowers individuals to achieve their goals, create, connect, and lead fulfilling lives. Insights from Mark Zuckerberg on the future of AI and human empowerment.

Mark Zuckerberg has just posted his vision of superintelligence: Personal Superintelligence. He starts by reiterating what a lot of people are saying, namely that AGI (Artificial General Intelligence) or superintelligence is coming soon:

Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.

He distinguishes what Meta is going to do with superintelligence from “others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, …”. The “others” here is a poke at OpenAI who, in their Charter, define AGI as “highly autonomous systems that outperform humans at most economically valuable work …” He juxtaposes OpenAI as automating work (for companies and governments) while Meta will put superintelligence in our personal hands for creative and communicative play.

Along the way, Zuckerberg hints that future models may not be open any more, a change in policy. Until now Meta has released open models rather than charging for access. Zuckerberg now worries that “superintelligence will raise novel safety concerns.” For this reason they will need to “be rigorous about mitigating these risks and careful about what we choose to open source.”

Why don’t I trust either Meta or OpenAI?

Humanitext Antiqua

Here at DH 2025 in Lisbon, Portugal, I heard a paper about a neat Japanese project, Humanitext Antiqua – ヒューマニテクスト. It allows you to identify an ancient philosophy subcorpus (e.g. Plato) and then ask questions of it. I was able to get all sorts of interesting results since they have trained their system on Greek and Roman philosophers like Aristotle and Cicero.

Here is a reference to the project:

Naoya Iwata, Ikko Tanaka, and Jun Ogawa, ‘Improving Semantic Search Accuracy of Classical Texts through Context-Oriented Translation’, Proceedings of IPSJ SIG Computers and the Humanities Symposium. Download link: https://researchmap.jp/n.iwata/published_papers/48448512
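The core technique, semantic search over a chosen subcorpus, can be illustrated independently of the project. The following is a generic sketch (not Humanitext’s code) using the sentence-transformers library, with a few invented passages standing in for a Plato subcorpus:

    # Generic sketch of semantic search over a small subcorpus (not Humanitext's code).
    from sentence_transformers import SentenceTransformer, util

    # Invented passages standing in for a Plato subcorpus.
    passages = [
        "The soul is immortal and has seen all things here and in the underworld.",
        "Justice in the city is each class doing its own work without meddling.",
        "Knowledge is recollection of what the soul once knew.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_embeddings = model.encode(passages, convert_to_tensor=True)

    query = "What does Plato say about justice?"
    query_embedding = model.encode(query, convert_to_tensor=True)

    # Rank passages by embedding similarity to the question.
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
    for hit in hits:
        print(f"{hit['score']:.3f}  {passages[hit['corpus_id']]}")

What the paper focuses on, judging from its title, is a “context-oriented translation” step intended to improve the accuracy of this kind of search for classical texts.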

Colloque « DH@LLM: Grands modèles de langage et humanités numériques » @ IEA & Sorbonne U

DH@LLM: Large Language Models and the Digital Humanities. A colloquium organized by Alexandre Gefen (CNRS-Sorbonne Nouvelle), Glenn Roe (Sorbonne Université), Ayla Rigouts Terryn (Université de Montréal), and Michael Sinatra (Université de Montréal), in collaboration with the Observatoire des textes, des idées et des corpus (ObTIC), the Centre de recherche interuniversitaire sur les humanités numériques (CRIHN), the Institut d’Études […]

Today I gave a keynote to open this symposium on Large Language Models and the digital humanities, Colloque « DH@LLM: Grands modèles de langage et humanités numériques » @ IEA & Sorbonne U. I didn’t talk much about LLMs; instead I talked about “Care and Repair for Responsibility Practices in Artificial Intelligence”. I argued that the digital humanities has a role to play in developing the responsibility practices that address the challenges of LLMs. I argued for an ethics of care approach that looks at the relationships between stakeholders (both individual and institutional) and asks how we can care for those who are more vulnerable and how we can repair emergent systems.

Brandolini’s law

In a 3QuarksDaily post about Bullshit and Cons: Alberto Brandolini and Mark Twain Issue a Warning About Trump, I came across Brandolini’s Law of Refutation, which states:

The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.

This law or principle goes a long way to explaining why bullshit, conspiracy theories, and disinformation are so hard to refute. The very act of refutation becomes suspect as if you are protesting too much. The refuter is made to look like the person with an agenda that we should be skeptical of.

The corollary is that it is less work to lie about someone before they have accused you of lying than to try to refute the accusation. Better to accuse the media of purveying fake news early than to wait until they publish news about you.

As for AI hallucinations, which I believe should be called AI bullshit, we can imagine Rockwell’s corollary:

The amount of energy needed to correct for AI hallucinations in a prompted essay is an order of magnitude bigger than the work of just writing it yourself.

CSDH/SCHN Congress 2025: Reframing Togetherness

These last few days I have been at the CSDH/SCHN conference that is part of Congress 2025. With colleagues and graduate research assistants I was part of a number of papers and panels. See CSDH/SCHN Congress 2025: Reframing Togetherness. The programme is here. Some of the papers I was involved in included:

  • Exploring the Deceptive Patterns of Chinook: Visualization and Storytelling Approaches Critical Software Study – Roya Sharifi; Ralph Padilla; Zahra Farhangfar; Yasmeen Abu-Laban; Eleyan Sawafta; and Geoffrey Rockwell
  • Building a Consortium: An Approach to Sustainability – Geoffrey Martin Rockwell; Michael Sinatra; Susan Brown; John Bradley; Ayushi Khemka; and Andrew MacDonald
  • Integrating Large Language Models with Spyral Notebooks – Sean Lis and Geoffrey Rockwell
  • AI-Driven Textual Analysis to Decode Canadian Immigration Social Media Discourse – Augustine Farinola & Geoffrey Martin Rockwell
  • The List in Text Analysis – Geoffrey Martin Rockwell; Ryan Chartier; and Andrew MacDonald

I was also part of a panel on Generative AI, LLMs, and Knowledge Structures organized by Ray Siemens. My paper was on Forging Interpretations with Generative AI. Here is the abstract:

Using large language models we can now generate fairly sophisticated interpretations of documents using natural language prompts. We can ask for classifications, summaries, visualizations, or specific content to be extracted. In short we can automate content analysis of the sort we used to count as research. As we play with the forging of interpretations at scale we need to consider the ethics of using generative AI in our research. We need to ask how we can use these models with respect for sources, care for transparency, and attention to positionality.
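To make the automation the abstract describes concrete, here is a minimal sketch of prompt-based content analysis: asking a model for a classification, a summary, and some extracted content in one request. It assumes the openai Python package and an OpenAI-compatible endpoint; the model name and prompt are illustrative, not part of the paper:

    # Illustrative sketch of prompt-based content analysis (classification,
    # summary, extraction). Model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    document = "..."  # the text to be interpreted

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a research assistant doing content analysis."},
            {"role": "user",
             "content": "Classify the genre of the following document, summarize it "
                        "in three sentences, and list any people named in it:\n\n" + document},
        ],
    )
    print(response.choices[0].message.content)

The ease of this is exactly what raises the ethical questions in the abstract: the interpretation is forged in seconds, but the respect for sources, the transparency, and the attention to positionality still have to come from us.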

Welcome to the Artificial Intelligence Incident Database

The starting point for information about the AI Incident Database

Maria introduced me to the Artificial Intelligence Incident Database. It contains summaries and links regarding different types of incidents related to AI. It is a good place to get a sense of the hazards.

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

Moral Responsibility

On Thursday (the 29th of May) I gave a talk on Moral Responsibility and Artificial Intelligence at the Canadian AI 2025 conference in Calgary, Alberta.

I discussed what moral responsibility might be in the context of AI and argued for an ethic of care (and repair) approach to building relationships of responsibility and responsibility practices.

There was a Responsible AI track at the conference with some great talks by Gideon Christian (U of Calgary in Law) and Julita Vassileva (U of Saskatchewan).