‘It was as if my father were actually texting me’: grief in the age of AI

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

The Guardian has a thorough story about the use of AI to evoke the dead, ‘It was as if my father were actually texting me’: grief in the age of AI. The story describes how one can train an artificial intelligence on past correspondence to mimic someone who has died. One can imagine academic uses of this where we create clones of historical figures with which to converse. Do we have enough David Hume to create an interesting AI agent?
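The underlying technique is roughly what practitioners call persona or few-shot prompting: feed a model samples of the person's writing and ask it to continue in that voice. Here is a minimal sketch in Python; the `build_persona_prompt` helper and the sample messages are invented for illustration, and a real system would pass the resulting prompt to a chat model such as ChatGPT rather than just printing it:

```python
# Sketch: turning saved correspondence into a "persona" prompt for a
# chat model. The excerpts below are invented; a real system would use
# years of actual messages and send the prompt to a model like ChatGPT.

def build_persona_prompt(name, excerpts):
    """Assemble a prompt asking a model to imitate a writing style."""
    lines = [
        f"You are impersonating {name}. Match the tone, vocabulary,",
        "and typical phrasing shown in these sample messages:",
        "",
    ]
    for text in excerpts:
        lines.append(f"- {text}")
    return "\n".join(lines)

# Invented sample data standing in for real correspondence.
samples = [
    "Don't forget your umbrella, kiddo. Love, Dad.",
    "Saw a cardinal in the yard this morning and thought of you.",
]
prompt = build_persona_prompt("Dad", samples)
print(prompt)
```

The ethical weight, of course, is not in the code but in the data: the prompt is only as faithful as the correspondence it is built from.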

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?

The article mentions some of the ethical quandaries:

  • Do dead people have rights? Or do others have rights related to a dead person’s image, voice, and pattern of conversation?
  • Is it healthy to interact with an AI revivification of a close relative?


The case for taking AI seriously as a threat to humanity

From the Open Philanthropy site I came across this older (2020) Vox article, The case for taking AI seriously as a threat to humanity by Kelsey Piper. The article nicely summarizes some of the history of concerns around AGI (Artificial General Intelligence), the term commonly used for an AI so advanced that it might be comparable to human intelligence. This history goes back to Turing’s colleague I.J. Good, who speculated in 1965 that,

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Such an explosion has been called the Singularity by Vernor Vinge and was popularized by Ray Kurzweil.

I came across this while following threads on the whole issue of whether AI would soon become an existential threat. The question of the dangers of AI (whether AGI or just narrow AI) has gotten a lot of attention, especially since Geoffrey Hinton ended his relationship with Google so he could speak freely about it. He and others signed a short statement published on the site of the Center for AI Safety,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The existential question only becomes relevant if one believes, as many do, that there is considerable risk that AI research and development is moving so fast that it may soon achieve some level of generality, at which point such an AGI could begin to act in unpredictable and dangerous ways. Alternatively, people could misuse such powerful AGIs to harm us. Open Philanthropy is one group focused on Potential Risks from Advanced AI. They could be classed as an organization with a longtermist view, the view that ethics (and philanthropy) should take long-term issues seriously.

Advances in AI could lead to extremely positive developments, but could also potentially pose risks from intentional misuse or catastrophic accidents.

Others have called for a Manhattan Project for AI Safety. There are, of course, those (including me) who feel that this distracts from the immediate unintended effects of AI, and/or that there is little existential danger for the moment as AGI is decades off. The cynic in me also wonders how much the distraction is intentional, as it both hypes the technology (it’s dangerous, therefore it must be important) and justifies ignoring stubborn immediate problems like racist bias in the training data.

Kelsey Piper has in the meantime published A Field Guide to AI Safety.

The question still remains whether AI is dangerous enough to merit the sort of ethical attention that nuclear power, for example, has received.

Bridging Divides – Research and Innovation

Thanks to my colleague Yasmeen, I was included in an important CFREF, Bridging Divides – Research and Innovation, led by Anna Triandafyllidou at Toronto Metropolitan University. Some of the topics I hope to work on include how information technology is being used to surveil and manage immigrants and, conversely, how immigrants use information technology.

Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The Center for AI Safety has issued a very short Statement on AI Risk (quoted above). It has been signed by the likes of Yoshua Bengio and Geoffrey Hinton. I’m not sure if it is an alternative to the much longer Open Letter, but it focuses on the warning without any prescription as to what we should do. The Open Letter was criticized by many in the AI community, so perhaps CAIS was trying to find wording that could bring together “AI Scientists” and “Other Notable Figures.”

I personally find this alarmist. I find myself less and less impressed with ChatGPT as it continues to fabricate answers of little use (because they are false). I tend to agree with Elizabeth Renieris, who is quoted in the BBC story “Artificial intelligence could lead to extinction, experts warn,” to the effect that there are far more pressing immediate issues with AI to worry about. She says,

“Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable,” she said. They would “drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide”.

All the concern about extinction has me wondering if this isn’t a way of hyping AI so as to make everyone involved and every AI business seem more important. If there is an existential risk then it must be a priority, and if it is a priority then we should be investing in it because, of course, the Chinese are. (Note that the Chinese have actually presented draft regulations that they will probably enforce.) In other words, the drama of extinction could serve the big AI companies like OpenAI, Microsoft, Google, and Meta in various ways:

  • The drama could convince people that there is real disruptive potential in AI so they should invest now! Get in before it is too late.
  • The drama could lead to regulation which would actually help the big AI companies as they have the capacity to manage regulation in ways that small startups don’t. The big will get bigger with regulation.

I should stress that this is speculation. I probably shouldn’t be so cynical. Instead, let’s look to what we can do locally.


An experimental open-source attempt to make GPT-4 fully autonomous. – Auto-GPT/README.md at master · Torantulino/Auto-GPT

From a video on 3 Quarks Daily on whether ChatGPT can prompt itself, I discovered Auto-GPT. Auto-GPT is powered by GPT-4: you describe a mission and it will try to launch tasks, assess them, and complete the mission. It was perhaps inevitable that someone would find a way to use ChatGPT or one of its relatives to try to complete complicated jobs, including taking over the world, as Chaos-GPT claims to want to do (using Auto-GPT).
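The loop behind such agents can be sketched simply: ask the model to break a mission into tasks, execute each task, and ask the model again to assess the result. Here is a toy version in Python. This is not Auto-GPT’s actual implementation: `ask_model` is a stand-in for a real GPT-4 call with invented canned responses, so it only illustrates the control flow:

```python
# Toy sketch of the plan/act/assess loop that agents like Auto-GPT run.
# ask_model() is a placeholder for a language-model call, and the
# "tasks" are plain strings rather than real tool invocations.

def ask_model(prompt):
    """Placeholder for a GPT-4 call; returns canned responses."""
    if "break the mission" in prompt:
        return ["research topic", "draft summary", "review summary"]
    return "done"  # pretend every task passes its assessment

def run_mission(mission, max_steps=10):
    """Plan tasks for a mission, run each, keep those assessed as done."""
    tasks = ask_model(f"break the mission into tasks: {mission}")
    completed = []
    for task in tasks[:max_steps]:  # cap steps so the loop terminates
        result = ask_model(f"perform and assess: {task}")
        if result == "done":
            completed.append(task)
    return completed

print(run_mission("summarize a report"))
```

The `max_steps` cap is the interesting design point: without some bound, an agent that keeps spawning sub-tasks has no reason ever to stop, which is exactly the worry with things like Chaos-GPT.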

How long will it be before someone figures out how to use these tools to do something truly nasty? I give it about six months before we get stories of generative AI being used to systematically harass people, find information on how to harm people, or waste resources like the paperclip maximizer. Is it surprising that governments like Italy’s have banned ChatGPT?


U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats

U of A computing scientists work with Japanese researchers to refine a virtual and mixed reality video game that can improve motor skills for older adults and sedentary people.

The Folio of the University of Alberta published a story about a trip to Japan that I and others embarked on, U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats. Ritsumeikan invited us to develop research collaborations around gaming, language and artificial intelligence. Our visit was a chance to further the collaborations, like the one my colleagues Eleni Stroulia and Victor Fernandez Cervantes are developing with Ruck Thawonmas around games for older adults. This inter-university set of collaborations builds on projects I was involved in going back to 2011, including a conference (Replaying Japan) and a journal, the Journal of Replaying Japan.

The highlight was the signing of a Memorandum of Understanding by the two presidents (of the U of A and Ritsumeikan). I was also involved, as was Professor Nakamura. May the collaboration thrive.

2023 Annual Public Lecture in Philosophy

Last week I gave the 2023 Annual Public Lecture in Philosophy. You can Watch a Recording here. The talk was on The Eliza Effect: Data Ethics for Machine Learning.

I started the talk with the case of Kevin Roose’s interaction with Sydney (Microsoft’s name for Bing Chat) where it ended up telling Roose that it loved him. From there I discussed some of the reasons we should be concerned with the latest generation of chatbots. I then looked at the ethics of LAION-5B as an example of how we can audit the ethics of projects. I ended with some reflections on what an ethics of AI could be.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

The Future of Life Institute is calling on AI labs to pause with a letter signed by over 1000 people (including myself), Pause Giant AI Experiments: An Open Letter – Future of Life Institute. The letter asks for a pause so that safety protocols can be developed,

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This letter to AI labs follows a number of essays and opinions suggesting that maybe we are going too fast and should show restraint, all in the face of the explosive interest in large language models after ChatGPT.

  • Gary Marcus wrote an essay in his substack on “AI risk ≠ AGI risk” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the Mediocre AI systems we do have.
  • Yuval Noah Harari has an opinion in the New York Times with the title, “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” where he talks about the dangers of AIs manipulating culture.

We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.

Los chatbots pueden ayudarnos a redescubrir la historia del diálogo

With the launch of sophisticated chatbots like OpenAI’s ChatGPT, effective dialogue between humans and artificial intelligence has become…

A Spanish online magazine of ideas, Dialektika, has translated my Conversation essay on ChatGPT and dialogue. See Los chatbots pueden ayudarnos a redescubrir la historia del diálogo. Nice to see the ideas circulating.

How Deepfake Videos Are Used to Spread Disinformation – The New York Times

For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.

The New York Times has a story, How Deepfake Videos Are Used to Spread Disinformation. The videos are actually from a service, Synthesia, which allows you to generate videos of talking heads from transcripts that you prepare. They have a range of professionally acted avatars, and their technology generates a video of one of them presenting your text. This is supposed to be used for quickly generating training videos (without paying actors), but someone used it for disinformation.