Isabel Pedersen gave the Stéfan Sinclair Lecture at Concordia on Create Me, Break Me, Remember Me: Art and AI in the Age of Reinvention. Among other things, she talked about the project Fabric of Digital Life, which documents over 5000 augmentation projects, tools, and platforms. This is a fascinating database.
The Gamergate Social Network: Interpreting Transphobia and Alt-Right Hate Online
Catherine Bevan led the writing of a paper that was just published in Digital Studies, The Gamergate Social Network: Interpreting Transphobia and Alt-Right Hate Online. The paper explores transphobia in the Gamergate controversy through a social network analysis. Catherine did a lot of work hand-tagging events and then visualizing them.
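For readers curious about the mechanics, here is a minimal sketch of the kind of analysis involved. The event tuples, account names, and networkx calls are my own illustration, not the paper's actual data or pipeline:

```python
# Minimal sketch (my illustration, not the paper's pipeline): build a
# directed network from hand-tagged interaction events and rank accounts
# by degree centrality.
import networkx as nx

# Hypothetical hand-tagged events: (source, target, type of interaction).
events = [
    ("user_a", "user_b", "retweet"),
    ("user_b", "user_c", "reply"),
    ("user_a", "user_c", "mention"),
]

G = nx.DiGraph()
for source, target, kind in events:
    G.add_edge(source, target, kind=kind)

# Degree centrality highlights the most connected accounts.
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda item: item[1], reverse=True):
    print(f"{node}: {score:.2f}")
```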
How safe is AI safety?
Today I gave a plenary talk on “How Safe is AI Safety?” to open a Workshop on AI and DH (Part 1) organized by the Centre de recherche interuniversitaire sur les humanités numériques (CRIHN) at the Université de Montréal.
In the talk I looked at how AI safety is being implemented in Canada and what the scope of the idea is. I talked about the shift from Responsible AI to AI Safety in the Canadian government’s rhetoric.
I’m trying to figure out what to call the methodology I have developed for this and other research excursions. It has elements of Foucault’s genealogy of ideas – following ideas that seem obvious by looking at how they are structured in institutions. Or it is an extension of Ian Hacking’s idea of historical ontology, where we try to understand ideas about things through their history.
Metaculus on AGI Outcomes
Listening to Jacob Steinhardt on The Hinton Lectures™ I learned about Metaculus, a forecasting service run as a public benefit company. It has a focus area on AI Progress with lots of AI-related forecasts (which seems to be a huge area of interest). The service coordinates human forecasts and builds infrastructure to help others forecast.
Neat!
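As a toy illustration of what aggregating such forecasts involves (my own sketch, not Metaculus's actual method), one can take a robust summary of the crowd's probabilities and score it once the outcome is known:

```python
# Toy sketch (not Metaculus's method): aggregate crowd probability
# forecasts with a median and score the result with a Brier score.
from statistics import median

forecasts = [0.15, 0.25, 0.30, 0.40, 0.60]  # hypothetical forecasts
community = median(forecasts)                # robust crowd aggregate
outcome = 1                                  # 1 = event happened, 0 = not

# Brier score: squared error of a probability forecast (lower is better).
brier = (community - outcome) ** 2
print(f"community forecast: {community:.2f}, Brier score: {brier:.2f}")
```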
Claudette – An Automated Detector of Potentially Unfair Clauses in Online Terms of Service
Randy Goebel gave a great presentation on the use of AI in Judicial Decision Making on Friday to my AI Ethics course. He showed us an example tool called Claudette which can be used to identify potentially unfair clauses in a Terms and Conditions document. You can try it here at the dedicated web site.
Why is this useful? It provides a form of summary of documents none of us read, one that could help us catch problematic clauses. It could help us be more careful users of applications.
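To give a sense of the underlying idea, here is a naive sketch. Claudette itself uses trained machine-learning classifiers rather than keyword matching, and the patterns and sample text below are hypothetical:

```python
# Naive sketch of the idea behind such a detector (Claudette itself uses
# trained ML classifiers, not this keyword heuristic).
import re

# Hypothetical patterns that often signal potentially unfair clauses.
UNFAIR_PATTERNS = {
    "unilateral change": r"we (may|reserve the right to) (change|modify)",
    "content removal": r"remove .* at (our|its) sole discretion",
    "arbitration": r"binding arbitration",
}

def flag_clauses(terms_text: str) -> list[tuple[str, str]]:
    """Return (category, sentence) pairs for sentences matching a pattern."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", terms_text):
        for category, pattern in UNFAIR_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                flagged.append((category, sentence))
    return flagged

sample = ("We reserve the right to modify these terms at any time. "
          "Disputes are resolved through binding arbitration.")
for category, sentence in flag_clauses(sample):
    print(f"[{category}] {sentence}")
```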
Can A.I. Be Blamed for a Teen’s Suicide?
The New York Times has a story about a youth who committed suicide after extended interactions with a character on Character.ai. The story, Can A.I. Be Blamed for a Teen’s Suicide?, describes how Sewell Setzer III had long discussions with a character called Daenerys Targaryen from the Game of Thrones series. He became isolated and grew attached to Daenerys. He eventually shot himself, and now his mother is suing Character.ai.
Here is an example of what he wrote in his journal:
I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.
The suit claims that Character.ai’s product was untested, dangerous and defective. It remains to be seen if these types of suits will succeed. In the meantime we need to be careful with these social AIs.
The 18th Annual Hurtig Lecture 2024: Canada’s Role in Shaping our AI Future
The video for the 2024 Hurtig Lecture is up. The speaker was Dr. Elissa Strome, Executive Director of the Pan-Canadian AI Strategy. She gave an excellent overview of the AI Strategy here in Canada and ended by discussing some of the challenges.
The Hurtig Lecture was organized by my colleague Dr. Yasmeen Abu-Laban. I got to moderate the panel discussion and Q & A after the lecture.
Dario Amodei: Machines of Loving Grace
Dario Amodei of Anthropic fame has published a long essay on AI titled Machines of Loving Grace: How AI Could Transform the World for the Better. In the essay he talks about how he doesn’t like the term AGI and prefers instead to talk about “powerful AI,” and he provides a set of characteristics he considers important, including the ability to work on issues in a sustained fashion over time.
Amodei also doesn’t worry much about the Singularity, as he believes powerful AI will still have to deal with real-world constraints, like building physical systems, when designing more powerful AI. I tend to agree.
The point of the essay is, however, to focus on five categories of positive applications of AI that are possible:
- Biology and physical health
- Neuroscience and mental health
- Economic development and poverty
- Peace and governance
- Work and meaning
The essay is long, so I won’t go into detail. What is important is that he articulates a set of positive goals that AI could help with in these categories. He calls his vision both radical and obvious. In a sense he is right – we have stopped trying to imagine a better world through technology, whether out of cynicism or a narrow attention to detail. As he puts it:
Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.
UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference
Yesterday I organized a satellite panel for the UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference. The full conference takes place on GatherTown, a conferencing system that feels like an 8-bit 80s game. You wander around our AI4IA conference space, talk with others who are close, and watch short prerecorded video talks, of which there are about 60. I’m proud that Amii and the University of Alberta provided the technical support and funding to make the conference possible. The videos will also be up on YouTube for those who can’t make the conference.
The event we organized at the University of Alberta on Friday was an online panel on What is Responsible in Responsible Artificial Intelligence with Bettina Berendt, Florence Chee, Tugba Yoldas, and Katrina Ingram.
Bettina Berendt looked at what the Canadian approach to responsible AI could be and how it might be short-sighted. She talked about a project that, like a translator, lets a person “translate” their writing in whistleblowing situations into prose that won’t identify them. It helps you remove the personally identifiable signal from the text. She then pointed out how this might be responsible, but might also lead to problems.
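The project she described does something far more sophisticated stylometrically; a toy version of the simplest part of the idea, merely redacting obvious identifiers (my own illustration, with made-up patterns), might look like this:

```python
# Toy illustration only: the actual project does stylometric
# "translation"; this sketch merely redacts obvious identifiers.
import re

# Hypothetical patterns for directly identifying strings.
PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d\s().-]{7,}\d",
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{tag}]", text)
    return text

print(redact("Contact me at jane.doe@example.org or +1 780 555 0199."))
```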
Florence Chee talked about how responsibility and ethics should be a starting point rather than an afterthought.
Tugba Yoldas talked about how meaningful human control is important to responsible AI and what it takes for there to be control.
Katrina Ingram of Ethically Aligned AI nicely wrapped up the short talks by discussing how she advises organizations that want to weave ethics into their work. She talked about the 4 Cs: Context, Culture, Content, and Commitment.
ASBA Releases Artificial Intelligence Policy Guidance for K-12 Education – Alberta School Boards Association
Alberta School Boards Association (ASBA) is pleased to announce the release of its Artificial Intelligence Policy Guidance. As Artificial Intelligence (AI) continues to shape the future of education, ASBA has […]
The Alberta School Boards Association (ASBA) has released its Artificial Intelligence Policy Guidance for K-12 education. This 14-page policy document is clear and useful without being prescriptive. It could be a model for other educational organizations. (Note that it was authored by someone I supervised.)