Edmond de Belamy by Obvious

The Portrait of Edmond de Belamy is a Generative Adversarial Network (GAN) ink print created by the art collective Obvious.

The name “Belamy” is a pun on the name of Ian Goodfellow, who developed GANs: “bel ami” is French for “good friend”, that is, a “good fellow”.
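Fittingly, the print is “signed” in the lower right with part of the GAN loss function. As a gloss of my own (not something Obvious explains in the work), the standard objective from Goodfellow et al.’s 2014 paper is the minimax game

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
\]

where the generator G maps random noise z to images and the discriminator D tries to tell real images from generated ones.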

The print sold at auction (at Christie’s in 2018) for US$432,500, a first for AI-generated art.

I learned about this from a talk on “Alternative Histories of genAI” by Lauren Tilton. She talked about the institutions involved in authorizing art. Is it art because it was bought for a lot of money?

The talk was part of a conference in Nice and Cannes, the Colloque “Créativités artificielles: Approches critiques de l’IA” (“Artificial Creativities: Critical Approaches to AI”), at which I also presented. My talk was on “Thinking-Through Trust in AI”.

The Coming Tsunami of Transnational Repression

Professor Ronald Deibert, Director of the Citizen Lab at the University of Toronto, appeared before the House of Commons’ Subcommittee on International Human Rights of the Standing Committee on Foreign Affairs and International Development (SDIR) on March 23rd, 2026, to testify on transnational repression. His brief included a discussion of artificial intelligence and its potential use for repression. He also made a clear recommendation that the government should regulate AI. Here is the recommendation from the text of his testimony:

Regulate AI. The government must squarely and soberly address the huge potential for widespread harm associated with LLMs and AI systems, as well as social media platforms which are connected to them. Although there are many potential economic and other benefits associated with these systems, the current political and economic context all but assures there will also be major harms emanating from their use and abuse. The government should cease any cooperation on AI with governments that are known perpetrators of TNR and DTR. It should engage in meaningful public consultation with affected communities on how these systems have begun to negatively affect peoples’ lives, as called for by the People’s Consultation on AI. And it should find ways to regulate AI uses, particularly among public agencies, to mitigate harms and ensure equitable outcomes. Part of this regulation must include independent due diligence audits of all tech platforms in a transparent and accountable manner consistent with Charter of Human Rights protections on freedom of speech and access to information.

I read about this in a story in the Globe and Mail.

Enshittification

The Norwegian Consumer Council has released a punchy video about enshittification. The website given at the end of the video leads to a page about Breaking Free, which links to the report Breaking Free: Pathways to a Fair Technological Future (PDF). The report argues that generative AI is the next frontier of enshittification, pointing to how AI can generate the large quantities of slop now sloshing around the internet.

The neologism enshittification was coined by Cory Doctorow. His web site has links to his book on it and videos of him talking about it.

The good news is, as Doctorow puts it in his book on the subject, “A new, good internet is possible. More than that, it is essential.” Section 5, the final section of the Norwegian report, offers advice on how we can break free.

For me, it is essential that we resist the network effect and just drop services that become unacceptably shitty. When a service changes its privacy settings for the worse, drop it. It may be painful, and it may feel as if your social life won’t recover, but that is what they want you to believe.

AI Isn’t Coming for Everyone’s Job

The Atlantic has a thoughtful article titled “AI Isn’t Coming for Everyone’s Job.” It points out that player pianos automated piano playing in the early 1900s, and could even play pieces humans didn’t have enough fingers for, yet that didn’t put piano players out of work.

How could humans possibly compete? Yet today you are more likely to encounter a piano player than a player piano, despite the job being successfully automated a very long time ago. The automatons have been relegated to museums and the rare curiosity. Pianists can be found any night of the week in hotel lobbies, Italian restaurants, and concert halls.

The article goes on to talk about how live music is still appreciated even though many musicians can’t play as well as what you can get from recorded (or automated) music. People like to see, hear, and interact with other people.

It also mentions how people fought back. Above you see an image from a 1930 ad. Earlier, John Philip Sousa had coined the phrase “canned music” in 1906 to mock the automated sound. (At the time, cylindrical records came in can-shaped containers.) According to the Wikipedia article, he testified to Congress,

These talking machines are going to ruin the artistic development of music in this country. When I was a boy… in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.

Sounds like some of the concerns we have about AI today, but again, I suspect live music will survive.

The problem is more likely to be arts where there isn’t a live person performing or interacting with you. Does it really matter whether illustrations in magazines are made by humans, AIs, or hybrids, as long as they catch the eye and illustrate the topic? Perhaps the visual arts will shift to live performance art, or to online painting performances like the Bob Ross videos on YouTube.

Anthropic Stands Up to the Department of War

Anthropic is standing up to the Department of War (DoW) on what we might call ethics issues. This story has some interesting angles.

Originally Anthropic had a contract with the DoW to provide AI services across the government. They had two red lines:

  1. Their AIs couldn’t be used for fully autonomous lethal weapons.
  2. Their AIs couldn’t be used for mass surveillance of US citizens.

The government pushed back and eventually cancelled the contract. It then designated Anthropic a Supply Chain Risk, which could make it hard for any government agency to contract with the company. So … Anthropic is now suing. Here are some interesting links on the story:

Both are short and worth reading.

Calculating Empires

The creative team of Kate Crawford and Vladan Joler, who brought us the Anatomy of an AI System, have created a much more ambitious, wall-sized infographic called Calculating Empires: https://calculatingempires.net.

Screen shot of Calculating Empires

I saw this at the Jeu de Paume exhibit on The World Through AI. It is the sort of thing I would like a large poster of so I could read it carefully, but … no luck … no posters.

Anyway, it is a fascinating map of communications technology.

Democracy and the Swarm

Gary Marcus has posted to his Substack, Marcus on AI, an essay about how AI bot swarms threaten to undermine democracy. The essay reports on an article published in Science by Marcus and others. (Preprint is here.)

In the essay they argue that AI-enabled swarms of synthetic, LLM-tuned posts can flood our public discourse.

Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement.
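To see why that independence matters, here is a toy simulation of my own (an illustration, not something from the Science article). When estimates come from distinct individuals, the crowd’s average error shrinks as the crowd grows; when a few operators speak through many accounts, adding accounts adds no new information.

```python
import random

# Toy model (my illustration, not from the article): each voice holds a
# noisy estimate of some true value. Averaging many *independent* voices
# converges on the truth; averaging sock puppets does not.
random.seed(42)
TRUTH = 0.0

def independent_crowd(n):
    """n distinct individuals, each with their own noisy view."""
    return [TRUTH + random.gauss(0, 1) for _ in range(n)]

def swarm_crowd(n, n_operators=3):
    """n accounts, but each just repeats the view of one of a few operators."""
    views = [TRUTH + random.gauss(0, 1) for _ in range(n_operators)]
    return [random.choice(views) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

for n in (10, 100, 10_000):
    indep_err = abs(mean(independent_crowd(n)) - TRUTH)
    swarm_err = abs(mean(swarm_crowd(n)) - TRUTH)
    print(f"n={n:>6}  independent error={indep_err:.3f}  swarm error={swarm_err:.3f}")
```

With independent voices the error falls off roughly as 1/√n; with the swarm it stays stuck at the error of the handful of real operators, no matter how many accounts they run. That is synthetic consensus in miniature.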

What I found particularly disturbing is how this is not just Russian or Chinese manipulation. The essay talks about how venture capital is now investing in swarm tools.

Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human.

The essay suggests various solutions, but it doesn’t mention the “solution” that seems most obvious to me: quit social media and get your news from trusted sources.

Deepfakes and Epistemic Degeneration

Two deepfake images of the pileup of cars.

There are a number of deepfake images of the 100-car pileup on the highway between Calgary and Airdrie on the 17th. You can see some of them, with discussion, at CMcalgary. These deepfakes raise a number of issues:

  • How would you know it is a deepfake? Do we really have to examine images like this closely to make sure they aren’t fake?
  • Given the proliferation of deepfake images and videos, does anyone believe photos any more? We are in a moment of epistemic transition, from generally believing photographs and videos to no longer trusting anything. We have to develop new ways of determining the truth of photographic evidence presented to us: check whether the photograph makes sense; question the authority of whoever shared it; check it against other sources; and check authoritative news sources (a small mechanical first check is sketched after this list).
  • Liar’s dividend – given the proliferation of deepfakes, public figures can claim anything is fake news in order to avoid accountability. In an environment where no one knows what is true, bullshit reigns and people don’t feel they have to believe anything. Instead of the pursuit of truth we all just follow what fits our preconceptions. An example of this is what happened in 2019, when a New Year’s message from President Ali Bongo was not believed because it looked fake, which contributed to an attempted coup.
  • It’s all about attention. We love to look at disaster images so the way to get attention is to generate and share them, even if they are generated. On some platforms you are even rewarded for attention.
  • Trauma is entertaining. We love to look at the trauma of others. Again, generating images of an event like the pileup we heard about is a way to get the attention of those looking for images of the trauma.
  • Even when people suspect the images are fake they can provide a “where’s Waldo” sort of entertainment where we comb them for evidence of the fakery.
Pileup with a container ship across the highway
  • Deepfakes then generate more deepfakes, and eventually people start responding with ironic deepfakes, like one in which a container ship is beached across the highway, causing the pileup.
  • Eventually there may be legal ramifications. People may try to use fake images for insurance claims, and insurance companies may then refuse photographs as evidence for a claim. People may treat a fake image as a form of identity theft if it portrays them or identifiable information like a license plate.
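As a small example of the mechanical first check mentioned above, here is a minimal Python sketch using the Pillow library (the file name is hypothetical). Missing metadata proves nothing by itself, since platforms routinely strip it and generators can fake it; cryptographic provenance standards like C2PA Content Credentials are the more principled fix. Still, looking at what metadata an image does carry is a cheap first pass.

```python
# Weak-heuristic provenance check with Pillow (pip install Pillow).
# Absence of EXIF data is common for generated or re-shared images,
# but is not proof of fakery on its own.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path):
    """Print any EXIF tags found in the image file at `path`."""
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

summarize_exif("pileup_photo.jpg")  # hypothetical file name
```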


The Next Generation Frontiers Symposium

The Next Generation Frontiers Symposium is in full swing in Banff! From sustainability to culture, yesterday’s sessions showcased the breadth of ideas shaping the future of AI. In a panel moderated by Hsiao-Ting Tseng, researchers Anfanny Chen, Shih-Fang Chen and Hsien-Tien Lin shared how AI can drive sustainable practices  — from smarter agriculture and resource management to greener supply chains and reduced carbon emissions. Later, Annie En-Shuin Lee, Dane Malenfant, Chi-Jui Hu, and Yun-Pu Tu led a fascinating discussion, moderated by Geoffrey Rockwell, on Indigenous AI and Culture, exploring the relationship between AI, cultural diversity and Indigenous knowledge. The day highlighted how meaningful interdisciplinary exchange can spark fresh perspectives and lead to new frontiers in research. (From here)

I’ve just come back from the Next Generation Frontiers Symposium, which was organized by CIFAR, Taiwan’s National Science and Technology Council (NSTC), and the Research Institute for Democracy, Society and Emerging Technology (DSET). It brought researchers from Taiwan and Canada together to talk about Responsible AI, Sovereign AI, AI and Sustainability, and Indigenous AI and Culture. I moderated the Indigenous AI and Culture theme, which looked at how AI might impact Indigenous communities in both Taiwan and Canada. Some of the reflections include:

  • Indigenous communities are often poorly represented in LLMs. We need ways for communities to personalize models with knowledge from their own communities.
  • The mass scraping of the internet, with little regard for the ownership or consent of content creators, is more of the extractive and colonizing behaviour that leads many Indigenous communities to distrust settler nations.
  • There are knowledge practices and types of knowledge like gendered knowledge, age-specific knowledge, and location-based knowledge that simply cannot be datafied and modelled if they are to maintain their character.
  • Datafication and modelling work with measurable evidence. Anything that can’t be captured, sampled, and measured can’t be datafied and thus can’t be modelled. Further, there is the danger that such evidence and knowledge will be delegitimized as unmeasurable and eventually excluded as fiction or mysticism. We could end up believing that only what can be datafied and modelled counts as knowledge.
  • Western epistemological practices of openness, science, and replicable results should not be imposed on communities with different epistemological practices. AI is the product of Western epistemology and thus may never be compatible with Indigenous wisdom.
  • We need to respect the desire of some communities to be forgotten and thus not scraped at all for measurable knowledge. Some may choose opacity.
  • Knowledge and its materials taken from communities should be returned. Communities should be supported to develop their own ways of preserving their knowledge including ways of datafying and modelling their knowledge, if they so wish.

Margaret Tu, one of the participants in the session, wrote a moving essay about the need for cultural safety for Indigenous communities in the face of disaster in Taiwan. See Taiwan’s Barrier Lake Disaster Intersects With Its Troubled Indigenous Policy. It ends with this wisdom,

Disasters demand speed, but recovery demands reflection. For the Fata’an, healing will not come from relocation alone; it must be rooted in both land and culture.

How To Festival: How to think like an AI Ethicist

On Saturday I gave an online talk on “How to think like an AI Ethicist” as part of a How To Festival. I talked about thinking about responsibility and the issue of “responsibility gaps”. I talked about some key risks like hallucinations, bias, deepfakes, and companion AIs. I also mentioned that we need to celebrate the effective uses of AI and think not just about hazards, but also about AI for good.

Artificial intelligence (AI) is everywhere. We all need to assess what to use and how to use the new tools. In this talk Geoffrey Rockwell will discuss some of the safety issues raised by the new generative AI tools. He will suggest some ways you can think through AI.

Geoffrey Rockwell is a Professor of Philosophy and Digital Humanities at the University of Alberta. He is also a Canada CIFAR AI Chair working on responsible AI.

EPL’s annual How To Festival is a chance to learn something new from someone who already knows how to do it. A variety of experts from professionals to enthusiasts will share their skills with you.

This is an online program. To receive a link and passcode to the online class, please register with your name and email address and instructions will be sent to you within 24 hours of the session.

Zoom, a third-party app, will be used for this virtual session. By joining, you acknowledge that EPL does not take responsibility for Zoom’s privacy policies and practices.

Source: How To Festival: How to think like an AI Ethicist