Deepfakes and Epistemic Degeneration

Two deepfake images of the car pileup.

There are a number of deepfake images of the 100-car pileup on the highway between Calgary and Airdrie on the 17th. You can see some, with discussion, at CMcalgary. These deepfakes raise a number of issues:

  • How would you know it is a deepfake? Do we really have to examine images like this closely to make sure they aren’t fake?
  • Given the proliferation of deepfake images and videos, does anyone believe photos any more? We are in a moment of epistemic transition from generally believing photographs and videos to no longer trusting anything. We have to develop new ways of determining the truth of photographic evidence presented to us. We need to check whether the photograph makes sense; question the authority of whoever shared it; check against other sources; and check authoritative news sources.
  • Liar’s dividend: given the proliferation of deepfakes, public figures can claim anything is fake news in order to avoid accountability. In an environment where no one knows what is true, bullshit reigns and people don’t feel they have to believe anything. Instead of pursuing truth, we all just follow what fits our preconceptions. An example of this is what happened in 2019, when the New Year’s message from President Ali Bongo was not believed because it looked fake, which contributed to an attempted coup.
  • It’s all about attention. We love to look at disaster images, so the way to get attention is to generate and share them, even if they are fake. On some platforms you are even rewarded for attention.
  • Trauma is entertaining. We love to look at the trauma of others. Again, generating images of an event like the pileup that we heard about is a way to get the attention of those looking for images of the trauma.
  • Even when people suspect the images are fake they can provide a “where’s Waldo” sort of entertainment where we comb them for evidence of the fakery.
Image of the pileup with a container ship across the highway.
  • Deepfakes then generate more deepfakes, and eventually people start responding with ironic deepfakes, like one where a container ship is beached across the highway, causing the pileup.
  • Eventually there may be legal ramifications. People may try to use fake images for insurance claims, and insurance companies may in turn refuse photographs as evidence for a claim. People may also treat a fake image as a form of identity theft if it portrays them or identifiable information like a license plate.


AI for Information Accessibility: From the Grassroots to Policy Action

It’s vital to “keep humans in the loop” to avoid humanizing machine-learning models in research

Today I was part of a panel organized by the Carnegie Council and the UNESCO Information for All Programme Working Group on AI for Information Accessibility: From the Grassroots to Policy Action. We discussed three issues, starting with environmental sustainability and artificial intelligence, then moving to principles for AI, and finally policies and regulation. I am in awe of the other speakers, who were excellent and introduced new ways of thinking about the issues.

Dariia Opryshko, for example, talked about the dangers of how Too Much Trust in AI Poses Unexpected Threats to the Scientific Process. We run the risk of limiting what we think is knowable to what can be researched by AI. We also run the risk that we trust only research conducted by AI. Alternatively, the misuse of AI could lead to science ceasing to be trusted. The Scientific American article linked to above is based on the research published in Nature on Artificial intelligence and illusions of understanding in scientific research.

I talked about the implications of the sort of regulation we see in AIDA (the AI and Data Act) in Bill C-27. AIDA takes a risk-management approach to regulating AI, defining a class of dangerous AI systems called “high-risk” that will be treated differently. This allows the regulation to be “agile” in the sense that it can be adapted to emerging types of AI. Right now we might be worried about LLMs and misinformation at scale, but five years from now it may be AIs that manage nuclear reactors. The issue with agility is that it depends on government officers who stay on top of the technology; otherwise the government will end up relying for advice on the very companies it is supposed to regulate. We thus need continuous training and experimentation in government for it to be able to regulate in an agile way.

ChatGPT is Bullshit.

The Hallucination Lie

Ignacio de Gregorio has a nice Medium essay about why ChatGPT is bullshit. The essay is essentially a short and accessible version of an academic article by Hicks, M. T., et al. (2024), ChatGPT is bullshit. They make the case that people make decisions based on their understanding of what LLMs are doing, and that “hallucination” is the wrong word because ChatGPT is not misperceiving the way a human would. Instead, people need to understand that LLMs are designed with no regard for the truth and are therefore bullshitting.

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. (p. 1)

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor. (p. 2)

At the end, the authors make the case that if we adopt Dennett’s intentional stance, we would do well to attribute to ChatGPT the intentions of a hard bullshitter, as that would allow us to better diagnose what it is doing. There is also a discussion of the intentions of the developers. You could say that they made available a tool that bullshits without care for the truth.

Are we, as a society, at risk of being led by these LLMs and their constant use to mistake a simulacrum of “truthiness” for true knowledge?


A Mirror Test for AI

I recently tried a version of a Mirror Test on ChatGPT 4. I asked it “What is life?” and then, for each answer, I copied the first sentence or two and put it back into the prompt. I wanted to see if ChatGPT was aware that I was parroting its answers back to it. Alas, it was not … it just kept churning through various answers. The openings of its answers were, in order of appearance:

The question “What is life?” has been pondered by philosophers, scientists, theologians, and thinkers of all disciplines for millennia. …

Indeed, the question of “What is life?” is one of the most profound and enduring inquiries in human history. …

Absolutely. The quest to define and understand life is deeply embedded in the human psyche and is reflected in our art, science, philosophy, and spirituality. …

It didn’t repeat itself, but it didn’t ask me why I was repeating what it said. Obviously it fails the Mirror Test.
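
For anyone who wants to try this themselves, here is a minimal sketch of how the experiment could be scripted against the OpenAI API. I did the original by hand in the chat interface, so the model name, the three rounds, and the crude sentence splitting below are my assumptions, not what I actually ran:

    # Minimal sketch of the mirror test: ask "What is life?", then keep
    # feeding the opening of each answer back as the next prompt.
    # Assumes the openai Python package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": "What is life?"}]

    for round_number in range(3):  # three rounds is an arbitrary choice
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model name; use whatever is available to you
            messages=messages,
        )
        answer = response.choices[0].message.content
        print(f"--- Round {round_number + 1} ---\n{answer}\n")

        # Keep the conversation history, then "mirror" roughly the first
        # two sentences of the answer back as the next user turn.
        messages.append({"role": "assistant", "content": answer})
        first_sentences = ". ".join(answer.split(". ")[:2])
        messages.append({"role": "user", "content": first_sentences})

The thing to watch for is whether the model ever remarks that the user turns are just its own openings handed back to it; in my hand-run version it never did.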


Ricordando Dino Buzzetti, co-fondatore e presidente onorario dell’AIUCD

The AIUCD (Association for Humanistic Informatics and Digital Culture) has posted a nice blog entry with memories of Dino Buzzetti (in Italian). See Ricordando Dino Buzzetti, co-fondatore e presidente onorario dell’AIUCD – Informatica Umanistica e Cultura Digitale: il blog dell’AIUCD.

Dino was the co-founder and honorary president of the AIUCD. He was one of the few other philosophers in the digital humanities. I last saw him in Tuscany and wish I had taken more time to talk with him about his work. His paper “Towards an operational approach to computational text analysis” is in the recent collection I helped edit On Making in the Digital Humanities.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

The Future of Life Institute is calling on AI labs to pause, in a letter signed by over 1,000 people (including myself): Pause Giant AI Experiments: An Open Letter – Future of Life Institute. The letter asks for a pause so that safety protocols can be developed:

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This letter to AI labs follows a number of essays and opinion pieces suggesting that maybe we are going too fast and should show restraint, all in the face of the explosive interest in large language models after ChatGPT.

  • Gary Marcus wrote an essay in his substack on “AI risk ≠ AGI risk” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the Mediocre AI systems we do have.
  • Yuval Noah Harari has an opinion in the New York Times with the title, “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” where he talks about the dangers of AIs manipulating culture.

We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.

Los chatbots pueden ayudarnos a redescubrir la historia del diálogo

With the launch of sophisticated chatbots such as OpenAI’s ChatGPT, effective dialogue between humans and artificial intelligence has become …

A Spanish online magazine of ideas, Dialektika, has translated my Conversation essay on ChatGPT and dialogue. See Los chatbots pueden ayudarnos a redescubrir la historia del diálogo. Nice to see the ideas circulating.

ChatGPT: Chatbots can help us rediscover the rich history of dialogue

The rise of AI chatbots provides an opportunity to expand the ways we do philosophy and research, and how we engage in intellectual discourse.

I published an article in The Conversation today: ChatGPT: Chatbots can help us rediscover the rich history of dialogue. This touches on a topic that I’ve been thinking about a lot … how chatbots are dialogue machines and how we can draw on the long history of dialogue in philosophy to understand the limits and potential of chatbots like ChatGPT.


Character.AI: Dialogue on AI Ethics

Part of an image generated from the text “cartoon pencil drawing of ethics professor and student talking” by Midjourney, Oct. 5, 2022.

Last week I created a character on Character.AI, a new artificial intelligence tool created by some ex-Google engineers who worked on LaMDA, the language model from Google that I blogged about before.

Character.AI, which is now down for maintenance due to the number of users, lets you quickly create a character and then enter into dialogue with it. It actually works quite well. I created “The Ethics Professor” and then wrote a script of questions that I used to engage the AI character. The dialogue is below.

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue he and a collaborator had with LaMDA that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA is really conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (And we could even doubt what we think we are feeling.) One answer is that we have a theory of mind such that we believe that things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what something has to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well, and that should be enough? Is the very conviction of Lemoine and others enough, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When it is pushed on a similar point it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment, which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used, and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks to be treated as an end in itself? How could one ethically say no, even with doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you would need a theory of why their consciousness is false.