Anthropic is standing up to the Department of War (DoW) on what we might call ethics issues. This story has some interesting angles. 

Originally Anthropic had a contract with the DoW to provide AI services across the government. They had two red lines:

  1. Their AIs couldn’t be used for fully autonomous lethal weapons.
  2. Their AIs couldn’t be used for mass surveillance of US citizens.

The government pushed back and eventually cancelled the contract. Then they designated Anthropic a Supply Chain Risk, which could make it hard for any government agency to contract with them. So … they are suing now. Here are some interesting links on the story:

Both are short and worth reading.

Indigitization

At the Spokenweb conference last summer I heard Gerry Lawson talk about the Indigitization.ca project, for which he is the Technical Lead. This is a neat project at UBC that has kits, guides, and small grants for Indigenous communities to “facilitate capacity building in Indigenous information management.” Communities can get a kit that lets them video their elders to create a digital archive of their cultural information. They do it themselves with help from the Indigitization project.

From the Digital Storage Guide

They have a great Toolkit with guides on all sorts of subjects. The image above is from the Digital Storage guide. These guides are useful to anyone doing digitization projects!

Calculating Empires

The creative team of Kate Crawford and Vladan Joler, who brought us the Anatomy of an AI System, have created a much more ambitious, wall-sized infographic called Calculating Empires: https://calculatingempires.net.

Screen shot of Calculating Empires

I saw this at the Jeu de Paume exhibit on The World Through AI. I feel it is the sort of thing I would like a large poster of so I could carefully read it, but … no luck … no posters.

Anyway, it is a fascinating map of communications technology.

Democracy and the Swarm

Gary Marcus has posted to his Substack, Marcus on AI, an essay about how AI bot swarms threaten to undermine democracy. The essay reports on an article published in Science by Marcus and others. (A preprint is here.)

In the essay they argue that AI-enabled swarms of synthetic, LLM-tuned posts can overwhelm our public discourse.

Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement.

What I found particularly disturbing is how this is not just Russian or Chinese manipulation. The essay talks about how venture capital is now investing in swarm tools.

Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human.

The essay suggests various solutions, but it doesn’t mention the “solution” that seems most obvious to me: quit social media and get your news from trusted sources.

Sharing What You Did: Documenting Text Analysis Research with Voyant and Spyral – Session 1

On Friday I gave the first of three workshops on Spyral, the notebook programming environment that extends Voyant. This was given online and supported Bridging Divides. Augustine Farinola developed it with me. Here are two key links:

Deepfakes and Epistemic Degeneration

Two deepfake images of the pileup of cars.

There are a number of deepfake images of the 100-car pileup on the highway between Calgary and Airdrie on the 17th. You can see some, with discussion, at CMcalgary. These deepfakes raise a number of issues:

  • How would you know it is a deepfake? Do we really have to examine images like this closely to make sure they aren’t fake?
  • Given the proliferation of deepfake images and videos, does anyone believe photos any more? We are in a moment of epistemic transition from generally believing photographs and videos to no longer trusting anything. We have to develop new ways of determining the truth of photographic evidence presented to us. We need to check whether the photograph makes sense; question the authority of whoever shared it; check against other sources; and check authoritative news sources.
  • Liar’s dividend – given the proliferation of deepfakes, public figures can claim anything is fake news in order to avoid accountability. In an environment where no one knows what is true, bullshit reigns and people don’t feel they have to believe anything. Instead of the pursuit of truth we all just follow what fits our preconceptions. An example of this is what happened in 2019 when the New Year’s message from President Ali Bongo was not believed because it looked fake, contributing to an attempted coup.
  • It’s all about attention. We love to look at disaster images, so the way to get attention is to generate and share them, even if they are fake. On some platforms you are even rewarded for attention.
  • Trauma is entertaining. We love to look at the trauma of others. Again, generating images of an event like the pileup is a way to get the attention of those looking for images of the trauma.
  • Even when people suspect the images are fake they can provide a “where’s Waldo” sort of entertainment where we comb them for evidence of the fakery.
Pileup with Container Ship across the highway
  • Deepfakes then generate more deepfakes, and eventually people start responding with ironic deepfakes, like one where a container ship is beached across the highway, causing the pileup.
  • Eventually there may be legal ramifications. On the one hand, people may try to use fake images for insurance claims. Insurance companies may then refuse photographs as evidence for a claim. People may also treat a fake image as a form of identity theft if it portrays them or identifiable information like a license plate.


Declaration of Independence – First E-Text

Project Gutenberg and the Declaration of Independence

I came across a blog post about how Michael S. Hart, the founder of Project Gutenberg, started the project in 1971 by typing the Declaration of Independence into the ARPANET and sending it to others. See 50 Years at Project Gutenberg.

Forty-Five Years of Digitizing Ebooks: Project Gutenberg’s Practices by Gregory B. Newby is a longer piece on the history of Project Gutenberg’s processes.

Hart passed away in 2011. Gregory B. Newby just passed away this October. The Project, however, seems to be in good hands with a foundation and board.

We’re Norman Rockwell’s family. Trump’s DHS has shamefully misused his work. | Opinion

The Problem We All Live With, Norman Rockwell, 1964

As Norman Rockwell’s family, we know he’d be devastated to see the Department of Homeland Security’s unauthorized misuse of his work.

Members of my family noticed over the last few weeks that the DHS has been using Norman Rockwell’s works without permission. We got together to write this opinion piece, We’re Norman Rockwell’s family. Trump’s DHS has shamefully misused his work. | Opinion

If Norman Rockwell were alive today, he would be devastated to see that his own work has been marshalled for the cause of persecution toward immigrant communities and people of color.

ArtNet now has a story about our opinion piece, as does the New York Times.

The Next Generation Frontiers Symposium

The Next Generation Frontiers Symposium is in full swing in Banff! From sustainability to culture, yesterday’s sessions showcased the breadth of ideas shaping the future of AI. In a panel moderated by Hsiao-Ting Tseng, researchers Anfanny Chen, Shih-Fang Chen and Hsien-Tien Lin shared how AI can drive sustainable practices  — from smarter agriculture and resource management to greener supply chains and reduced carbon emissions. Later, Annie En-Shuin Lee, Dane Malenfant, Chi-Jui Hu, and Yun-Pu Tu led a fascinating discussion, moderated by Geoffrey Rockwell, on Indigenous AI and Culture, exploring the relationship between AI, cultural diversity and Indigenous knowledge. The day highlighted how meaningful interdisciplinary exchange can spark fresh perspectives and lead to new frontiers in research. (From here)

I’ve just come back from the Next Generation Frontiers Symposium, which was organized by CIFAR, Taiwan’s National Science and Technology Council (NSTC), and the Research Institute for Democracy, Society and Emerging Technology (DSET). This brought researchers from Taiwan and Canada together to talk about Responsible AI, Sovereign AI, AI and Sustainability, and Indigenous AI and Culture. I moderated the Indigenous AI and Culture theme, which looked at how AI might impact Indigenous communities in both Taiwan and Canada. Some of the reflections include:

  • Indigenous communities are often poorly represented in LLMs. We need ways for communities to personalize models with knowledge from their own community.
  • The mass scraping of the Internet with little regard for the ownership or consent of content creators is more of the extractive and colonizing behaviour that leads many Indigenous communities to distrust settler nations.
  • There are knowledge practices and types of knowledge like gendered knowledge, age-specific knowledge, and location-based knowledge that simply cannot be datafied and modelled if they are to maintain their character.
  • Datafication and modelling work with measurable evidence. Anything that can’t be captured, sampled, and measured can’t be datafied and thus can’t be modelled. Further, there is the danger that such evidence and knowledge will be delegitimized as unmeasurable and eventually excluded as fiction or mysticism. We could end up believing that only what can be datafied and modelled is knowledge.
  • Western epistemological practices of openness, science, and replicable results should not be imposed on communities with different epistemological practices. AI is the product of Western epistemology and thus may never be compatible with Indigenous wisdom.
  • We need to respect the desire of some communities to be forgotten and thus not scraped at all for measurable knowledge. Some may choose opacity.
  • Knowledge and its materials taken from communities should be returned. Communities should be supported to develop their own ways of preserving their knowledge including ways of datafying and modelling their knowledge, if they so wish.

Margaret Tu, one of the participants in the session, wrote a moving essay about the need for cultural safety for indigenous communities in the face of disaster in Taiwan. See Taiwan’s Barrier Lake Disaster Intersects With Its Troubled Indigenous Policy. It ends with this wisdom,

Disasters demand speed, but recovery demands reflection. For the Fata’an, healing will not come from relocation alone; it must be rooted in both land and culture.

How To Festival: How to think like an AI Ethicist

On Saturday I gave an online talk on “How to think like an AI Ethicist” that was part of a How To Festival. I talked about responsibility and the issue of “responsibility gaps,” and about some key risks like hallucinations, bias, deepfakes, and companion AIs. I also mentioned that we need to celebrate the effective uses of AI and think not just about hazards, but also about AI for good.

Artificial intelligence (AI) is everywhere. We all need to assess what to use and how to use the new tools. In this talk Geoffrey Rockwell will discuss some of the safety issues raised by the new generative AI tools. He will suggest some ways you can think through AI.

Geoffrey Rockwell is a Professor of Philosophy and Digital Humanities at the University of Alberta. He is also a Canada CIFAR AI Chair working on responsible AI.

EPL’s annual How To Festival is a chance to learn something new from someone who already knows how to do it. A variety of experts from professionals to enthusiasts will share their skills with you.

This is an online program. To receive a link and passcode to the online class, please register with your name and email address and instructions will be sent to you within 24 hours of the session.

Zoom, a third-party app, will be used for this virtual session. By joining, you acknowledge that EPL does not take responsibility for Zoom’s privacy policies and practice.

Source: How To Festival: How to think like an AI Ethicist