Can A.I. Be Blamed for a Teen’s Suicide?

The New York Times has a story about a youth who died by suicide after extended interactions with a character on Character.ai. The story, Can A.I. Be Blamed for a Teen’s Suicide?, describes how Sewell Setzer III had long discussions with a character based on Daenerys Targaryen from Game of Thrones. He became increasingly isolated and attached to Daenerys. He eventually shot himself, and now his mother is suing Character.ai.

Here is an example of what he wrote in his journal,

I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.

The suit claims that Character.ai’s product was untested, dangerous and defective. It remains to be seen if these types of suits will succeed. In the meantime we need to be careful with these social AIs.

The 18th Annual Hurtig Lecture 2024: Canada’s Role in Shaping our AI Future

The video for the 2024 Hurtig Lecture is up. The speaker was Dr. Elissa Strome, Executive Director of the Pan-Canadian AI Strategy. She gave an excellent overview of the AI Strategy here in Canada and ended by discussing some of the challenges.

The Hurtig Lecture was organized by my colleague Dr. Yasmeen Abu-Laban. I got to moderate the panel discussion and Q & A after the lecture.

UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference

Yesterday I organized a satellite panel for the UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference. The full conference takes place on GatherTown, a conferencing system that feels like an 8-bit 80s game. You wander around the AI4IA conference space, talk with others who are nearby, and watch short prerecorded video talks, of which there are about 60. I’m proud that Amii and the University of Alberta provided the technical support and funding to make the conference possible. The videos will also be up on YouTube for those who can’t make the conference.

The event we organized at the University of Alberta on Friday was an online panel on What is Responsible in Responsible Artificial Intelligence with Bettina Berendt, Florence Chee, Tugba Yoldas, and Katrina Ingram.

Bettina Berendt looked at what the Canadian approach to responsible AI could be and how it might be short-sighted. She talked about a project that, like a translator, lets a person “translate” their writing in whistleblowing situations into prose that won’t identify them. It helps you remove the personally identifying signal from the text. She then pointed out how this might be responsible, but might also lead to problems.

Florence Chee talked about how responsibility and ethics should be a starting point rather than an afterthought.

Tugba Yoldas talked about how meaningful human control is important to responsible AI and what it takes for there to be control.

Katrina Ingram of Ethically Aligned AI nicely wrapped up the short talks by discussing how she advises organizations that want to weave ethics into their work. She talked about the 4 Cs: Context, Culture, Content, and Commitment.


System Prompts – Anthropic

From a story on TechCrunch, it seems that Anthropic has made their system prompts public. See System Prompts – Anthropic. For example, the system prompt for Claude 3.5 Sonnet starts with,

<claude_info> The assistant is Claude, created by Anthropic. The current date is {}. Claude’s knowledge base was last updated on April 2024.

These system prompts are fascinating since they describe how Anthropic hopes Claude will behave. A set of commandments, if you will. Anthropic describes the purpose of the system prompts thus:

Claude’s web interface (Claude.ai) and mobile apps use a system prompt to provide up-to-date information, such as the current date, to Claude at the start of every conversation. We also use the system prompt to encourage certain behaviors, such as always providing code snippets in Markdown. We periodically update this prompt as we continue to improve Claude’s responses. These system prompt updates do not apply to the Anthropic API.
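For those using the API rather than the Claude.ai apps, the system prompt is something you supply yourself. Here is a minimal sketch with the Anthropic Python SDK; the model name, date, and prompt text are placeholders of my own, not Anthropic’s published prompt.

    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        # Via the API you write your own system prompt; the published prompts
        # are only applied in the Claude.ai web and mobile interfaces.
        system="The assistant is Claude, created by Anthropic. The current date is 2024-08-26.",
        messages=[{"role": "user", "content": "What does a system prompt do?"}],
    )
    print(message.content[0].text)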

South Korea faces deepfake porn ’emergency’

The president has addressed the growing epidemic after Telegram users were found exchanging doctored photos of underage girls.

Once again, deepfake porn is in the news as South Korea faces deepfake porn ’emergency’. Teenagers have been posting deepfake porn images of people they know, including minors, on sites like Telegram.

South Korean President Yoon Suk Yeol on Tuesday instructed authorities to “thoroughly investigate and address these digital sex crimes to eradicate them”.

This has gone beyond isolated cases like those in Spain or Winnipeg. In South Korea it has spread to hundreds of schools. Porn is proving to be a major use of AI.

When A.I.’s Output Is a Threat to A.I. Itself

As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

The New York Times has a terrific article on model collapse, When A.I.’s Output Is a Threat to A.I. Itself. They illustrate what happens when an AI is repeatedly trained on its own output.

Model collapse is likely to become a problem for new generative AI systems trained on the internet, which, in turn, is more and more a trash can full of AI-generated misinformation. That companies like OpenAI don’t seem to respect the copyright and creativity of others makes it likely that there will be less and less free human data available. (This blog may end up the last source of fresh human text. 🙂)

The article also has an example of how a model’s output can converge and thus lose diversity as the model is trained on its own output over and over.

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.
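The convergence effect is easy to see in a toy simulation of my own (not the article’s experiment): a “model” that is just a normal distribution is refit each generation to its own output, and, like real generative models that underweight rare data, it drops the tails. Its diversity steadily shrinks.

    import random
    import statistics

    # Toy sketch of model collapse: refit a "model" (a normal distribution)
    # to its own output each generation, losing the rarest outcomes along
    # the way. The spread of what it generates steadily shrinks.
    random.seed(42)
    mu, sigma = 0.0, 1.0  # generation 0: the original human-made data
    for generation in range(1, 11):
        samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
        kept = samples[50:-50]  # the model under-represents the rarest 10%
        mu = statistics.fmean(kept)
        sigma = statistics.stdev(kept)
        print(f"generation {generation}: diversity (stdev) = {sigma:.3f}")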

Words Used at the Democratic and Republican National Conventions

Counting frequently spoken words and phrases at both events.

The New York Times ran a neat story that used text analysis to visualize the differences between Words Used at the Democratic and Republican National Conventions. They used a number of different visualizations, including butterfly bar graphs like the one above. They also had a form of word bubbles that I thought was less successful.
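The counting behind such a comparison is simple to sketch. Assuming two plain-text transcripts (the file names and sample words below are hypothetical):

    import re
    from collections import Counter

    def word_counts(path):
        """Lowercase a transcript and count its words."""
        with open(path, encoding="utf-8") as f:
            return Counter(re.findall(r"[a-z']+", f.read().lower()))

    # dnc.txt and rnc.txt stand in for the two convention transcripts.
    dnc, rnc = word_counts("dnc.txt"), word_counts("rnc.txt")
    for word in ["freedom", "economy", "border", "democracy"]:
        print(f"{word:10}  DNC: {dnc[word]:4}   RNC: {rnc[word]:4}")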

In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story on how, In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack is creating a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine, but the Ukrainians are also weaponizing cheap technologies and, in the process, removing a lot of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks put together following plans circulating on the internet?

Why the pope has the ears of G7 leaders on the ethics of AI

Pope Francis is leaning on the thinking of Paolo Benanti, a friar adept at explaining how technology can change the world

The Guardian has some good analysis on Why the pope has the ears of G7 leaders on the ethics of AI. The PM of Italy, Meloni, invited the pope to address the G7 leaders on the issue of AI. I blogged about this here. It is worth pointing out that this is not the first time the Vatican has intervened on the issue of AI ethics. Here is a short timeline:

  • In 2020 a bunch of Catholic organizations and industry heavyweights signed the Rome Call (Call for AI Ethics). The Archbishop of Canterbury just signed as well.
  • In 2021 they created the RenAIssance Foundation, building on the Rome Call. Its scientific director is Paolo Benanti, a charismatic Franciscan friar, professor, and writer on religion and technology. He is apparently advising both Meloni and the pope, and he coined the term “algo-ethics”. Most of his publications are in Italian, but there is an interview in English. He is also apparently on the OECD’s expert panel now.
  • In 2022 Benanti published Human in the Loop: Decisioni umane e intelligenze artificiali (Human in the Loop: Human Decisions and Artificial Intelligences), which is about the importance of ethics to AI and of the human in ethical decisions.
  • In 2024 Meloni invited the pope to address the G7 leaders gathered in Italy on AI.

Rebind | Read Like Never Before With Experts, AI, & Original Content

Experience the next chapter in reading with Rebind: the first AI-reading platform. Embark on expert-guided journeys through timeless classics.

From a New York Times story I learned about John Kaag’s new initiative Rebind | Read Like Never Before With Experts, AI, & Original Content. The philosophers Kaag and Clancy Martin have teamed up with an investor to start a company that creates AI-enhanced “rebindings” of classics. They work with out-of-copyright books and then pay someone to interpret or comment on each one. The commentary is then used to train an AI with which you can dialogue as you go through the book. The end result (which I am on the waitlist to try) will be a reading experience enhanced by interpretative videos and chances to interact. It answers Plato’s old critique of writing, that you cannot ask questions of a text. Now you can.
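Rebind has not said how its system is built, but one can imagine a simple version in which the paid commentary is handed to a chat model as context for each chapter. A hedged sketch, with the commentary dictionary and prompt wording invented for illustration:

    import anthropic

    # Hypothetical: expert commentary keyed by chapter, prepared in advance.
    commentary = {
        1: "The guide notes how the opening chapter frames the whole book...",
    }

    client = anthropic.Anthropic()

    def ask_about_chapter(chapter: int, question: str) -> str:
        # The expert's commentary goes into the system prompt so the model
        # answers in light of it, not just from its training data.
        reply = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=400,
            system=f"You are a reading companion. Expert commentary on chapter {chapter}: {commentary[chapter]}",
            messages=[{"role": "user", "content": question}],
        )
        return reply.content[0].text

    print(ask_about_chapter(1, "Why does the narrator want to go to sea?"))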

This reminds me of an experiment by Schwitzgebel, Strasser, and Crosby, who created a Daniel Dennett chatbot. Here you can see Schwitzgebel’s reflections on the project.

This project raised ethical issues like whether it was ethical to simulate a living person. In this case they asked for Dennett’s permission and didn’t give people direct access to the chatbot. With the announcements about Apple Intelligence it looks like Apple may provide an AI that is part of the system that will have access to your combined files so as to help with search and to help you talk with yourself. Internal dialogue, of course, is the paradigmatic manifestation of consciousness. Could one import one or two thinkers to have a multi-party dialogue about ones thinking over time … “What do you think Plato; should I write another paper about ethics and technology?”