The Illusion Of AI’s Existential Risk

In sum, AI acting on its own cannot induce human extinction in any of the ways that extinctions have happened in the past. Appeals to the competitive nature of evolution or previous instances of a more intelligent species causing the extinction of a less intelligent species reflect a common mischaracterization of evolution by natural selection.

Could artificial intelligence (AI) soon get to the point where it could enslave us? An Amii colleague pointed me to this sensible article, The Illusion Of AI’s Existential Risk, which argues that it is extremely unlikely that an AI could evolve to the point where it could manipulate us and prevent us from turning it off. One of the points the authors make is that the situation is completely different from past extinctions.

Our safety is the topic of Brian Christian’s excellent book The Alignment Problem, which discusses different approaches to developing AIs so that they are aligned with our values. An important point, made by Stuart Russell and quoted in the book, is that we don’t want AIs to have the same values as us; we want them to value our having values and to pay attention to our values.

This raises the question of how an AI might know what we value. One approach is Constitutional AI, where we train an AI to critique and revise its own responses against a constitution that captures our values, and then use that model to help train others.
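As a very loose sketch of the idea (the llm() function, the two principles, and the prompts below are placeholders of my own, not Anthropic’s actual method or constitution), the critique-and-revise step might look something like this:

```python
# A minimal sketch, under loose assumptions, of the critique-and-revise loop
# behind Constitutional AI. llm() is a stand-in for a real model call, and the
# "constitution" here is invented for illustration.
CONSTITUTION = [
    "Choose the response that is least likely to encourage harm.",
    "Choose the response that most respects privacy and dignity.",
]

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("plug in a real model client here")

def constitutional_revision(question: str) -> str:
    answer = llm(question)
    for principle in CONSTITUTION:
        critique = llm(f"Principle: {principle}\nAnswer: {answer}\n"
                       "Point out any way the answer violates the principle.")
        answer = llm(f"Original answer: {answer}\nCritique: {critique}\n"
                     "Rewrite the answer to address the critique.")
    return answer  # revised answers like this one then become training data for other models
```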

One of the problems with ethics, however, is that human ethics isn’t simple and may not be something one can capture in a constitution. For this reason, another approach is Inverse Reinforcement Learning (IRL), where we ask an AI to infer our values from a mass of evidence of ethical discourse and behaviour.
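To make the idea concrete, here is a toy sketch of IRL of my own devising: a five-state world, a hidden reward, and the feature-matching trick behind maximum-entropy IRL. None of the numbers or names come from the book or from OpenAI.

```python
# A toy sketch of inverse reinforcement learning: a five-state chain world, a
# hidden "true" reward, and a learner that recovers reward weights by matching
# the expert's discounted feature expectations (the core idea behind
# maximum-entropy IRL). Every number here is an illustrative assumption.
import numpy as np

n_states, n_actions = 5, 2                 # states 0..4; actions: 0 = left, 1 = right
gamma, horizon = 0.9, 20
features = np.eye(n_states)                # one-hot state features
true_w = np.array([0., 0., 0., 0., 1.])    # hidden reward: rightmost state is "good"

def step(s, a):
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

def soft_policy(w):
    """Soft value iteration under reward weights w; returns a stochastic policy."""
    q = np.zeros((n_states, n_actions))
    for _ in range(100):
        v = np.log(np.exp(q).sum(axis=1))
        q = np.array([[features[step(s, a)] @ w + gamma * v[step(s, a)]
                       for a in range(n_actions)] for s in range(n_states)])
    return np.exp(q - np.log(np.exp(q).sum(axis=1, keepdims=True)))

def feature_expectations(policy, start=0):
    """Discounted expected feature counts when following the policy from 'start'."""
    d = np.zeros(n_states)
    d[start] = 1.0
    fe = np.zeros(n_states)
    for t in range(horizon):
        nd = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                nd[step(s, a)] += d[s] * policy[s, a]
        d = nd
        fe += (gamma ** t) * (d @ features)
    return fe

# The "expert behaviour" is generated from the hidden reward. The learner only
# sees the expert's feature expectations and nudges its weights until its own
# behaviour produces the same expectations.
expert_fe = feature_expectations(soft_policy(true_w))
w = np.zeros(n_states)
for _ in range(200):
    w += 0.05 * (expert_fe - feature_expectations(soft_policy(w)))

print(np.round(w, 2))   # the inferred weights end up favouring state 4
```

The same trick is what makes IRL attractive for alignment: in principle the “expert behaviour” could be a mass of records of human choices and discourse rather than a toy gridworld.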

My guess is that this is what OpenAI is trying in its Superalignment project. Imagine an ethical surveillance project that uses IRL to develop a (black) moral box which can then be used to train AIs to be aligned. Imagine if it could be tuned to the ethics of different communities.

OpenAI announces Superalignment team

OpenAI has announced a Superalignment team and a four-year project to create an automated alignment researcher. They believe superintelligence (an AI more intelligent than humans) is possible within a decade, and that we therefore need to accelerate research into alignment. They believe developing an AI alignment researcher that is itself an AGI will give them a way to scale up and “iteratively align superintelligence.” In other words, they want to set an AI to aligning more powerful AIs.

Alignment is an approach to AI safety that tries to develop AIs so they act as we would want and expect them to. The idea is to make sure that right out of the box AIs would behave in ways aligned with our values.

Needless to say, there are issues with this approach, as Aaron Snoswell outlines in this nice Conversation piece, What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues.

  • First, and importantly, OpenAI has to figure out how to align an AGI so that it can tune the superintelligences to come.
  • You can’t get superalignment without alignment, and we don’t really know what that is or how to get it. There isn’t consensus as to what our values should be, so any alignment would have to be to some particular ethical position.
  • Why is OpenAI focusing only on superalignment? Why not try a number of approaches, from promoting regulation to developing more ethical training datasets? How can they be so sure about one approach? What do they know that we don’t? Or … what do they think they know?
  • Snoswell believes we should start by “acknowledging and addressing existing harms”. There are plenty of immediate difficult problems that should be addressed rather than “kicking the meta-ethical can one block down the road, and hoping we don’t trip over it later on.”
  • Technical safety isn’t a problem that can be solved. It is an ongoing process of testing and refining, as this tweet from Yann LeCun puts it.

Anyway, I wish them well. No doubt interesting research will come out of this initiative, which I hope OpenAI will share. In the meantime, the rest of us can carry on with the boring safety research.

OpenAI adds Code Interpreter to ChatGPT Plus

Upload datasets, generate reports, and download them in seconds!

OpenAI has just released a plug-in called Code Interpreter, which is truly impressive. You need ChatGPT Plus to be able to turn it on. It then allows you to upload data and to use plain English to analyze it. You write requests/prompts like:

What are the top 20 content words in this text?

It interprets your request and describes what it will try to do in Python, then generates the Python and runs it. When it has finished, it shows the results. You can see examples in this Medium article:

ChatGPT’s Code Interpreter Was Just Released. Here’s How It Will Change Data Science Forever
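To give a sense of the kind of code it writes, here is a rough sketch of my own (not actual Code Interpreter output) of how the prompt above might be translated into Python; the file name and the inline stopword list are placeholders:

```python
# A rough sketch (mine, not actual Code Interpreter output) of how the prompt
# "What are the top 20 content words in this text?" might be turned into Python.
# The file name and the inline stopword list are placeholders.
import re
from collections import Counter

with open("uploaded_text.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)

# Inline stopword list as a fallback, since (as noted in the limitations below)
# downloading NLTK resources does not always work in the sandbox.
stopwords = {
    "the", "and", "of", "to", "a", "in", "that", "it", "is", "was", "i",
    "for", "on", "you", "he", "be", "with", "as", "by", "at", "have",
    "are", "this", "not", "but", "had", "his", "they", "from", "she",
    "which", "or", "we", "an", "were", "her", "been", "their", "its",
}

content_words = [w for w in words if w not in stopwords]
for word, count in Counter(content_words).most_common(20):
    print(f"{word}\t{count}")
```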

I’ve been trying to see how I can use it to analyze a text. Here are some of the limitations:

  • It can’t handle large texts. It can be used to study a book-length text, but not a collection of books.
  • It frequently tries to load NLTK or other libraries and then fails. What is interesting is that it then tries other ways of achieving the same goal. For example, I asked for adjectives near the word “nature” and, when it couldn’t load the NLTK POS tagger, it accessed a list of top adjectives in English and searched for those (see the sketch after this list).
  • It can generate graphs of different sorts, but not interactives.
  • It is difficult to get the full transcript of an experiment, where by “full” I mean the Python code, the prompts, the responses, and any graphs generated. You can ask for an iPython notebook with the code, which you can download. Perhaps I can also get a PDF with the images.
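The fallback strategy described in the second bullet might look roughly like this sketch (my reconstruction, not the code Code Interpreter actually produced; the adjective list and window size are invented):

```python
# A sketch of the fallback strategy: instead of POS tagging, check a window of
# words around each occurrence of "nature" against a fixed list of common
# adjectives. The list and window size below are invented for illustration.
import re

common_adjectives = {"human", "wild", "true", "own", "whole", "divine",
                     "good", "common", "great", "natural", "second"}

def adjectives_near(text, target="nature", window=3):
    words = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == target:
            nearby = words[max(0, i - window): i + window + 1]
            hits.extend(x for x in nearby if x in common_adjectives)
    return hits

sample = "The wild nature of the place revealed human nature in its true form."
print(adjectives_near(sample))   # ['wild', 'human', 'true']
```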

The Code Interpreter is in beta, so I expect they will be improving it. It is nonetheless very impressive how it can translate prompts into processes. Particularly impressive is how it tries different approaches when things fail.

Code Interpreter could make data analysis and manipulation much more accessible. Without learning to code, you can interrogate a data set and potentially run other processes. It is possible to imagine an unshackled Code Interpreter that could access the internet and do all sorts of things (like running a paper-clip business).

‘It was as if my father were actually texting me’: grief in the age of AI

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

The Guardian has a thorough story about the use of AI to evoke the dead, ‘It was as if my father were actually texting me’: grief in the age of AI. The story talks about how one can train an artificial intelligence on past correspondence to mimic someone who has passed away. One can imagine academic uses of this, where we create clones of historical figures with which to converse. Do we have enough David Hume to create an interesting AI agent?

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?

The article mentions some of the ethical quandaries:

  • Do dead people have rights? Or do others have rights related to a dead person’s image, voice, and pattern of conversation?
  • Is it healthy to interact with an AI revivification of a close relative?

 

The case for taking AI seriously as a threat to humanity

From the Open Philanthropy site I came across this older (2020) Vox article, The case for taking AI seriously as a threat to humanity by Kelsey Piper. The article nicely summarizes some of the history of concerns around AGI (Artificial General Intelligence), as people tend to call an AI so advanced that it might be comparable to human intelligence. This history goes back to Turing’s colleague I.J. Good, who speculated in 1965 that,

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Such an explosion has been called the Singularity by Vernor Vinge, an idea popularized by Ray Kurzweil.

I came across this while following threads on the whole issue of whether AI would soon become an existential threat. The question of the dangers of AI (whether AGI or just narrow AI) has gotten a lot of attention, especially since Geoffrey Hinton ended his relationship with Google so that he could speak about it. He and others signed a short statement published on the site of the Center for AI Safety,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The existential question only becomes relevant if one believes, as many do, that there is considerable risk that AI research and development is moving so fast that it may soon achieve some level of generality, at which point such an AGI could begin to act in unpredictable and dangerous ways. Alternatively, people could misuse such powerful AGIs to harm us. Open Philanthropy is one group that is focused on Potential Risks from Advanced AI. They could be classed as an organization with a longtermist view, the view that it is important to ethics (and philanthropy) to consider long-term issues.

Advances in AI could lead to extremely positive developments, but could also potentially pose risks from intentional misuse or catastrophic accidents.

Others have called for a Manhattan Project for AI Safety. There are, of course, those (including me) who feel that this is distracting from the immediate unintended effects of AI and/or that there is little existential danger for the moment, as AGI is decades off. The cynic in me also wonders how much the distraction is intentional, as it both hypes the technology (it’s dangerous, therefore it must be important) and justifies ignoring stubborn immediate problems like racist bias in the training data.

Kelsey Piper has in the meantime published A Field Guide to AI Safety.

The question still remains whether AI is dangerous enough to merit the sort of ethical attention that nuclear power, for example, has received.

Bridging Divides – Research and Innovation

Thanks to my colleague Yasmeen, I was included in an important CFREF, Bridging Divides – Research and Innovation, led by Anna Triandafyllidou at Toronto Metropolitan University. Some of the topics I hope to work on include how information technology is being used to surveil and manage immigrants and, conversely, how immigrants use information technology.

Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The Center for AI Safety has issued a very short Statement on AI Risk (see the sentence above). It has been signed by the likes of Yoshua Bengio and Geoffrey Hinton. I’m not sure if it is an alternative to the much longer Open Letter, but it focuses on the warning without any prescription as to what we should do. The Open Letter was criticized by many in the AI community, so perhaps CAIS was trying to find wording that could bring together “AI Scientists” and “Other Notable Figures.”

I personally find this alarmist. I find myself less and less impressed with ChatGPT as it continues to fabricate answers of little use (because they are false). I tend to agree with Elizabeth Renieris, who is quoted in this BBC story, Artificial intelligence could lead to extinction, experts warn, to the effect that there are a lot more pressing immediate issues with AI to worry about. She says,

“Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable,” she said. They would “drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide”.

All the concern about extinction has me wondering if this isn’t a way of hyping AI to make everyone and every AI business more important. If there is an existential risk then it must be a priority, and if it is a priority then we should be investing in it because, of course, the Chinese are. (Note that the Chinese have actually presented draft regulations that they will probably enforce.) In other words, the drama of extinction could serve the big AI companies like OpenAI, Microsoft, Google, and Meta in various ways:

  • The drama could convince people that there is real disruptive potential in AI, so they should invest now! Get in before it is too late.
  • The drama could lead to regulation, which would actually help the big AI companies, as they have the capacity to manage regulation in ways that small startups don’t. The big will get bigger with regulation.

I should stress that this is speculation. I probably shouldn’t be so cynical. Instead, let’s look to what we can do locally.

Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous. – Auto-GPT/README.md at master · Torantulino/Auto-GPT

From a video on 3 Quarks Daily on whether ChatGPT can prompt itself, I discovered Auto-GPT. Auto-GPT is powered by GPT-4. You can describe a mission and it will try to launch tasks, assess them, and complete the mission. Needless to say, it was inevitable that someone would find a way to use ChatGPT or one of its relatives to try to complete complicated jobs, including taking over the world, as Chaos-GPT claims to want to do (using Auto-GPT).
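The basic loop behind such tools can be sketched roughly as follows (this is my own simplified reconstruction, not Auto-GPT’s actual code; llm() is a placeholder, and the prompts and stopping rule are invented):

```python
# A simplified sketch of the plan-act-assess loop that tools like Auto-GPT wrap
# around a language model. This is not Auto-GPT's actual code: llm() is a
# placeholder, and the prompts and stopping rule are illustrative assumptions.
def llm(prompt: str) -> str:
    """Stand-in for a call to GPT-4 or a similar model."""
    raise NotImplementedError("plug in a real model client here")

def run_mission(mission: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # Plan: ask the model for the next task given the mission and progress so far.
        task = llm(f"Mission: {mission}\nDone so far: {history}\n"
                   "Propose the single next task, or reply DONE.")
        if task.strip().upper() == "DONE":
            break
        # Act: in Auto-GPT this step can call tools, browse, or write files.
        result = llm(f"Carry out this task and report the outcome: {task}")
        # Assess: check whether the step moved the mission forward before continuing.
        critique = llm(f"Mission: {mission}\nTask: {task}\nResult: {result}\n"
                       "Did this move the mission forward? Answer briefly.")
        history.append(f"{task} -> {result} ({critique})")
    return history
```

Everything interesting, and everything risky, lives in what the “act” step is allowed to do.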

How long will it be before someone figures out how to use these tools to do something truly nasty? I give it about six months before we get stories of generative AI being used to systematically harass people, or to find information on how to harm people, or to find ways to waste resources like the paperclip maximizer. Is it surprising that countries like Italy have banned ChatGPT?

 

U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats

U of A computing scientists work with Japanese researchers to refine a virtual and mixed reality video game that can improve motor skills for older adults and sedentary people.

The Folio of the University of Alberta published a story about a trip to Japan that I and others embarked on, U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats. Ritsumeikan invited us to develop research collaborations around gaming, language and artificial intelligence. Our visit was a chance to further the collaborations, like the one my colleagues Eleni Stroulia and Victor Fernandez Cervantes are developing with Ruck Thawonmas around games for older adults. This inter-university set of collaborations builds on projects I was involved in going back to 2011, including a conference (Replaying Japan) and a journal, the Journal of Replaying Japan.

The highlight was the signing of a Memorandum of Understanding by the two presidents (of the U of A and Ritsumeikan). I was also involved, as was Professor Nakamura. May the collaboration thrive.

2023 Annual Public Lecture in Philosophy

Last week I gave the 2023 Annual Public Lecture in Philosophy. You can Watch a Recording here. The talk was on The Eliza Effect: Data Ethics for Machine Learning.

I started the talk with the case of Kevin Roose’s interaction with Sydney (Microsoft’s name for Bing Chat) where it ended up telling Roose that it loved him. From there I discussed some of the reasons we should be concerned with the latest generation of chatbots. I then looked at the ethics of LAION-5B as an example of how we can audit the ethics of projects. I ended with some reflections on what an ethics of AI could be.