2023 Annual Public Lecture in Philosophy

Last week I gave the 2023 Annual Public Lecture in Philosophy. You can watch a recording here. The talk was on The Eliza Effect: Data Ethics for Machine Learning.

I started the talk with the case of Kevin Roose’s interaction with Sydney (Microsoft’s internal name for Bing Chat), in which the chatbot ended up telling Roose that it loved him. From there I discussed some of the reasons we should be concerned with the latest generation of chatbots. I then looked at the ethics of LAION-5B as an example of how we can audit the ethics of projects. I ended with some reflections on what an ethics of AI could be.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

The Future of Life Institute is calling on AI labs to pause with a letter signed by over 1,000 people (including myself): Pause Giant AI Experiments: An Open Letter – Future of Life Institute. The letter asks for a pause so that safety protocols can be developed:

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This letter to AI labs follows a number of essays and opinion pieces suggesting that maybe we are going too fast and should show restraint, all in the face of the explosive interest in large language models since ChatGPT.

  • Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk”, arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
  • Yuval Noah Harari has an opinion piece in The New York Times titled “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills”, in which he talks about the dangers of AIs manipulating culture.

We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.

Los chatbots pueden ayudarnos a redescubrir la historia del diálogo

With the launch of sophisticated chatbots like OpenAI’s ChatGPT, effective dialogue between humans and artificial intelligence has become …

A Spanish online magazine of ideas, Dialektika, has translated my Conversation essay on ChatGPT and dialogue. See Los chatbots pueden ayudarnos a redescubrir la historia del diálogo. Nice to see the ideas circulating.

How Deepfake Videos Are Used to Spread Disinformation – The New York Times

For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.

The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos come from a service called Synthesia, which lets you generate videos of talking heads from transcripts that you prepare. They offer a range of professionally acted avatars, and their technology generates a video of your text being presented. This is meant for quickly generating training videos (without paying actors), but someone used it for disinformation.

ChatGPT: Chatbots can help us rediscover the rich history of dialogue

The rise of AI chatbots provides an opportunity to expand the ways we do philosophy and research, and how we engage in intellectual discourse.

I published an article in The Conversation today: ChatGPT: Chatbots can help us rediscover the rich history of dialogue. This touches on a topic I’ve been thinking about a lot: how chatbots are dialogue machines and how we can draw on the long history of dialogue in philosophy to understand the limits and potential of chatbots like ChatGPT.


ChatGPT listed as author on research papers: many scientists disapprove

The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.

We are beginning to see interesting ethical issues crop up regarding the new LLMs (large language models) like ChatGPT. For example, Nature has a news article, ChatGPT listed as author on research papers: many scientists disapprove.

It makes sense to document use, but why would we document use of ChatGPT and not, for example, use of a library or of a research tool like Google Scholar? What about the use of ChatGPT demands that it be acknowledged?

Why scientists are building AI avatars of the dead | WIRED Middle East

Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.

Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.

The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.

Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legalities and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? On a living person?

We routinely animate other people in novels, in dialogues (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?

Then there are the possibilities for manipulating a digital twin, or for manipulating others through such a twin.

As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”


The Alt-Right Manipulated My Comic. Then A.I. Claimed It. 

[Image: AI-generated comic in the style of Sarah Andersen]

My drawings are a reflection of my soul. What happens when artificial intelligence — and anyone with access to it — can replicate them?

Webcomic artist Sarah Andersen has written a timely opinion piece for The New York Times, The Alt-Right Manipulated My Comic. Then A.I. Claimed It. She talks about being harassed by the alt-right, who created a shadow version of her work full of violent, racist and Nazi motifs. Now she could be haunted by an AI-generated shadow like the image above. Her essay nicely captures the feeling of helplessness that many artists who survive on their work must be feeling in the face of the “research” trick of LAION, the nonprofit (backed by Stability AI) that scraped copyrighted material under the cover of academic research, which was then made available for commercialization as Stable Diffusion.

Andersen links to a useful article on AI Data Laundering, which is a good term for what researchers seem to be doing, intentionally or not. What is the solution? Datasets gathered with consent? Alas, too many of us, myself included, have released images on Flickr and other sites. So, as the article’s author Andy Baio puts it, “Asking for permission slows technological progress, but it’s hard to take back something you’ve unconditionally released into the world.”

While artists like Andersen may have no legal recourse, that doesn’t make the practice ethical. Perhaps the academics who are doing the laundering should be called out. Perhaps we should consider boycotting such tools and hiring live artists when we have graphic design work.

How AI image generators work, like DALL-E, Lensa and stable diffusion

Use our simulator to learn how AI generates images from “noise.”

The Washington Post has a nice explainer on how text-to-image generators work: How AI image generators work, like DALL-E, Lensa and stable diffusion. They let you play with the generator, though you have to stick with the predefined phrases. What I hadn’t realized was the role of static noise in the diffusion model. As far as I understand it, the AI is trained to recognize the noise that has been added to images, so that it can later generate new images by progressively removing noise from pure static.
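To make the role of noise a bit more concrete, here is a minimal, hypothetical PyTorch sketch of the training step behind diffusion models: corrupt an image with random static, then train a network to predict the static that was added. The TinyDenoiser network and the simple linear mixing of image and static are my own illustrative stand-ins, not the Post’s simulator or what DALL-E or Stable Diffusion actually use (those rely on large U-Nets and carefully tuned noise schedules).

```python
# Toy illustration of the diffusion training idea, not a production model.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Hypothetical stand-in for the large denoising network in a real system."""
    def __init__(self, dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256),  # +1 so the model also sees the noise level
            nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, noisy_image, t):
        return self.net(torch.cat([noisy_image, t], dim=1))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    image = torch.rand(16, 3 * 32 * 32)   # pretend batch of training images
    t = torch.rand(16, 1)                 # random noise level in [0, 1]
    noise = torch.randn_like(image)       # the "static"
    noisy = (1 - t) * image + t * noise   # mix the image toward pure static

    loss = ((model(noisy, t) - noise) ** 2).mean()  # learn to recognize the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Generation then runs this in reverse: start from pure static and repeatedly subtract the predicted noise (guided by the text prompt in systems like DALL-E) until an image emerges.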