Enshittification

The Norwegian Consumer Council has released a punchy video about enshittification. If you go to the web site at the end, you get to a page about Breaking Free. This has a link to a report, Breaking Free: Pathways to a Fair Technological Future (PDF), which argues that generative AI is the next frontier of enshittification. They point to the large quantities of AI-generated slop now sloshing around the internet.

The neologism enshittification was coined by Cory Doctorow. His web site has links to his book on the subject and to videos of him talking about it.

The good news is, as Doctorow puts it in his book on the subject, “A new, good internet is possible. More than that, it is essential.” The final section 5 of the Norwegian report offers advice on how we can break free.

For me, it is essential that we resist the network effect and just drop services that become unacceptably shitty. When a service changes its privacy settings for the worse, drop it. It may be painful, and it may feel as if your social life won’t recover, but that is what they want you to believe.

AI Isn’t Coming for Everyone’s Job

The Atlantic has a thoughtful article titled AI Isn’t Coming for Everyone’s Job. The article points out that player pianos automated the playing of pianos in the early 1900s and could even play things humans didn’t have enough fingers for, but that didn’t put piano players out of work.

How could humans possibly compete? Yet today you are more likely to encounter a piano player than a player piano, despite the job being successfully automated a very long time ago. The automatons have been relegated to museums and the rare curiosity. Pianists can be found any night of the week in hotel lobbies, Italian restaurants, and concert halls.

The article goes on to talk about how live music is still appreciated, even though many live musicians can’t play as well as what you can get from recorded (or automated) music. People like to see, hear, and interact with other people.

It also mentions how people fought back. Above you see an image from an ad in 1930. Earlier, in 1906, John Philip Sousa coined the phrase “canned music” to mock the automated sound. (At the time, cylindrical records came in can-shaped containers.) According to the Wikipedia article, he testified to Congress,

These talking machines are going to ruin the artistic development of music in this country. When I was a boy… in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.

Sounds like some of the concerns we have about AI today, but again, I suspect live music will survive.

The problem is more likely to be in arts where there isn’t a live person performing or interacting with you. Does it really matter if illustrations in magazines are made by humans, AIs, or hybrids, as long as they catch the eye and illustrate the topic? Perhaps the visual arts will shift to live performance art, or to online performances like those of Bob Ross now on YouTube.

Anthropic vs. the Department of War

Anthropic is standing up to the Department of War (DoW) on what we might call ethics issues. This story has some interesting angles.

Originally Anthropic had a contract with the DoW to provide AI services across the government. They had two red lines:

  1. Their AIs couldn’t be used for fully autonomous lethal weapons.
  2. Their AIs couldn’t be used for mass surveillance of US citizens.

The government pushed back and eventually cancelled the contract. Then the DoW designated Anthropic a Supply Chain Risk, which could make it hard for any government agency to contract with them. So … Anthropic is suing now. Here are some interesting links on the story:

Both are short and worth reading.

Indigitization

At the Spokenweb conference last summer I heard Gerry Lawson talk about the Indigitization.ca project, for which he is the Technical Lead. This is a neat project at UBC that provides kits, guides, and small grants for Indigenous communities to “facilitate capacity building in Indigenous information management.” Communities can get a kit that lets them video their elders to create a digital archive of their cultural information. They do it themselves with help from the Indigitization project.

From the Digital Storage Guide

They have a great Toolkit with guides on all sorts of subjects. The image above is from the Digital Storage guide. These guides are useful to anyone doing digitization projects!

Calculating Empires

The creative team of Kate Crawford and Vladan Joler, who brought us the Anatomy of an AI System, have created a much more ambitious, wall-sized infographic called Calculating Empires: https://calculatingempires.net.

Screen shot of Calculating Empires

I saw this at the Jeu de Paume exhibit on The World Through AI. I feel it is the sort of thing I would like a large poster of so I could carefully read it, but … no luck … no posters.

Anyway, it is a fascinating map of communications technology.

Democracy and the Swarm

Gary Marcus has posted to his Substack, Marcus on AI, an essay about how AI bot swarms threaten to undermine democracy. The essay reports on an article published in Science by Marcus and others. (Preprint is here.)

In the essay they argue that AI-enabled swarms of synthetic, LLM-tuned posts can overwhelm our public discourse.

Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement.

What I found particularly disturbing is how this is not just Russian or Chinese manipulation. The essay talks about how venture capital is now investing in swarm tools.

Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human.

The essay suggests various solutions, but it doesn’t mention the “solution” that seems most obvious to me: quit social media and get your news from trusted sources.

Sharing What You Did: Documenting Text Analysis Research with Voyant and Spyral – Session 1

On Friday I gave the first of three workshops on Spyral, the Voyant extension notebook programming environment. The workshop was given online and supported the Bridging Divides project. Augustine Farinola developed it with me. Here are two key links:

Deepfakes and Epistemic Degeneration

Two deepfake images of the pileup of cars.

There are a number of deepfake images of the 100-car pileup on the highway between Calgary and Airdrie on the 17th. You can see some, with discussion, at CMcalgary. These deepfakes raise a number of issues:

  • How would you know it is a deepfake? Do we really have to examine images like this closely to make sure they aren’t fake?
  • Given the proliferation of deepfake images and videos, does anyone believe photos any more? We are in a moment of epistemic transition from generally believing photographs and videos to no longer trusting anything. We have to develop new ways of determining the truth of photographic evidence presented to us. We need to check whether the photograph makes sense; question the authority of whoever shared it; check against other sources; and check authoritative news sources.
  • Liar’s dividend – given the proliferation of deepfakes, public figures can claim anything is fake news in order to avoid accountability. In an environment where no one knows what is true, bullshit reigns and people don’t feel they have to believe anything. Instead of the pursuit of truth we all just follow what fits our preconceptions. An example of this is what happened in 2019, when the New Year’s message from President Ali Bongo was not believed because it looked fake, which contributed to an attempted coup.
  • It’s all about attention. We love to look at disaster images so the way to get attention is to generate and share them, even if they are generated. On some platforms you are even rewarded for attention.
  • Trauma is entertaining. We love to look at the trauma of others. Again, generating images of an event like the pileup that we heard about is a way to get the attention of those looking for images of the trauma.
  • Even when people suspect the images are fake they can provide a “where’s Waldo” sort of entertainment where we comb them for evidence of the fakery.
Pileup with Container Ship
  • Deepfakes then generate more deepfakes and eventually people start responding with ironic deepfakes where a container ship is beached across the highway causing the pileup.
  • Eventually there may be legal ramifications. On the one hand, people may try to use fake images for insurance claims. Insurance companies may then refuse photographs as evidence for a claim. People may treat a fake image as a form of identity theft if it portrays them or identifiable information like a license plate.


Declaration of Independence – First E-Text

Project Gutenberg and the Declaration of Independence

I came across a blog post about how Michael S. Hart, the founder of Project Gutenberg, started the project in 1971 by typing the Declaration of Independence into the ARPANET and sending it to others. See 50 Years at Project Gutenberg.

Forty-Five Years of Digitizing Ebooks: Project Gutenberg’s Practices by Gregory B. Newby is a longer piece on the history of Project Gutenberg’s processes.

Hart passed away in 2011. Gregory B. Newby just passed away this October. The Project, however, seems to be in good hands with a foundation and board.

We’re Norman Rockwell’s family. Trump’s DHS has shamefully misused his work. | Opinion

The Problem We All Live With, Norman Rockwell, 1964

As Norman Rockwell’s family, we know he’d be devastated to see the Department of Homeland Security’s unauthorized misuse of his work.

Members of my family noticed over the last few weeks that the DHS has been using Norman Rockwell’s works without permission. We got together to write this opinion piece, We’re Norman Rockwell’s family. Trump’s DHS has shamefully misused his work. | Opinion

If Norman Rockwell were alive today, he would be devastated to see that his own work has been marshalled for the cause of persecution toward immigrant communities and people of color.

ArtNet now has a story about our opinion piece, as does the New York Times.