Metaculus on AGI Outcomes

Listening to Jacob Steinhardt on The Hinton Lectures™ I learned about Metaculus, a forecasting service run as a public benefit company. It has a focus area on AI Progress with lots of AI-related forecasts, which seems to be a huge area of interest. The service coordinates human forecasts and builds infrastructure to help others forecast.

Neat!

Claudette – An Automated Detector of Potentially Unfair Clauses in Online Terms of Service

Randy Goebel gave a great presentation on the use of AI in Judicial Decision Making to my AI Ethics course on Friday. He showed us an example tool called Claudette which can be used to identify potentially unfair clauses in a Terms and Conditions document. You can try it at the dedicated web site here.

Why is this useful? It provides a form of summary of documents none of us read, which could help us catch problematic clauses and be more careful users of applications.

Can A.I. Be Blamed for a Teen’s Suicide?

The New York Times has a story about a youth who died by suicide after extended interactions with a character on Character.ai. The story, Can A.I. Be Blamed for a Teen’s Suicide?, describes how Sewell Setzer III had long discussions with a character called Daenerys Targaryen from the Game of Thrones series. He became isolated and grew attached to Daenerys. He eventually shot himself, and now his mother is suing Character.ai.

Here is an example of what he wrote in his journal,

I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.

The suit claims that Character.ai’s product was untested, dangerous and defective. It remains to be seen whether these types of suits will succeed. In the meantime we need to be careful with these social AIs.

The 18th Annual Hurtig Lecture 2024: Canada’s Role in Shaping our AI Future

The video for the 2024 Hurtig Lecture is up. The speaker was Dr. Elissa Strome, Executive Director of the Pan-Canadian AI Strategy. She gave an excellent overview of the AI Strategy here in Canada and ended by discussing some of the challenges.

The Hurtig Lecture was organized by my colleague Dr. Yasmeen Abu-Laban. I got to moderate the panel discussion and Q & A after the lecture.

Dario Amodei: Machines of Loving Grace

Dario Amodei of Anthropic fame has published a long essay on AI titled Machines of Loving Grace: How AI Could Transform the World for Better. In the essay he explains that he doesn’t like the term AGI and prefers instead to talk about “powerful AI,” and he provides a set of characteristics he considers important, including the ability to work on issues in a sustained fashion over time.

Amodei also doesn’t worry much about the Singularity, as he believes powerful AI will still have to deal with real-world constraints, like building physical systems, when designing more powerful AI. I tend to agree.

The point of the essay is, however, to focus on five categories of positive applications of AI that are possible:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

The essay is long, so I won’t go into detail. What is important is that he articulates a set of positive goals that AI could help with in these categories. He calls his vision both radical and obvious. In a sense he is right – we have stopped trying to imagine a better world through technology, whether out of cynicism or attention only to details.

As Amodei puts it in the essay:

Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.