UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference

Yesterday I organized a satellite panel for the UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference. The full conference takes place on GatherTown, a conferencing system that feels like an 8-bit 80s game. You wander around our AI4IA conference space, talk with others who are nearby, and watch short prerecorded video talks, of which there are about 60. I’m proud that Amii and the University of Alberta provided the technical support and funding to make the conference possible. The videos will also be up on YouTube for those who can’t make the conference.

The event we organized at the University of Alberta on Friday was an online panel on “What is Responsible in Responsible Artificial Intelligence” with Bettina Berendt, Florence Chee, Tugba Yoldas, and Katrina Ingram.

Bettina Berendt looked at what the Canadian approach to responsible AI could be and how it might be short-sighted. She talked about a project that, like a translator, lets a person “translate” their writing in whistleblowing situations into prose that won’t identify them. It helps you remove the personally identifying signal from the text. She then pointed out how this might be responsible, but might also lead to problems.
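For a sense of what stripping identifying information from text can look like in its simplest form, here is a toy sketch that redacts named entities with spaCy. This is only a rough cousin of the system Berendt described, which also rewrites style; the example text, and the assumption that the en_core_web_sm model is installed, are mine.

```python
# A toy illustration (not Berendt's system, which also rewrites style):
# scrub named entities from a text with spaCy, assuming the
# en_core_web_sm model has been downloaded.
import spacy

nlp = spacy.load("en_core_web_sm")

text = "I saw Maria Keller forge the invoice at Acme GmbH in Berlin."
doc = nlp(text)

# Replace each detected entity with its type label, working backwards
# through the text so earlier character offsets stay valid.
redacted = text
for ent in reversed(doc.ents):
    redacted = redacted[:ent.start_char] + f"[{ent.label_}]" + redacted[ent.end_char:]

print(redacted)  # e.g. "I saw [PERSON] forge the invoice at [ORG] in [GPE]."
```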

Florence Chee talked about how responsibility and ethics should be a starting point rather than an afterthought.

Tugba Yoldas talked about how meaningful human control is important to responsible AI and what it takes for there to be control.

Katrina Ingram of Ethically Aligned AI nicely wrapped up the short talks by discussing how she advises organizations that want to weave ethics into their work. She talked about the 4 Cs: Context, Culture, Content, and Commitment.


System Prompts – Anthropic

From a story on TechCrunch it seems that Anthropic has made their system prompts public. See System Prompts – Anthropic. For example, the system prompt for Claude 3.5 Sonnet starts with:

<claude_info> The assistant is Claude, created by Anthropic. The current date is {}. Claude’s knowledge base was last updated on April 2024.

These system prompts are fascinating since they describe how Anthropic hopes Claude will behave. A set of commandments, if you will. Anthropic describes the purpose of the system prompts thus:

Claude’s web interface (Claude.ai) and mobile apps use a system prompt to provide up-to-date information, such as the current date, to Claude at the start of every conversation. We also use the system prompt to encourage certain behaviors, such as always providing code snippets in Markdown. We periodically update this prompt as we continue to improve Claude’s responses. These system prompt updates do not apply to the Anthropic API.
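That last point matters for developers: if you call Claude through the API, you supply your own system prompt. Here is a minimal sketch of what that looks like with Anthropic’s Python SDK, assuming the anthropic package is installed and an API key is set; the prompt text is my own paraphrase, not Anthropic’s published prompt.

```python
# A minimal sketch, assuming the anthropic SDK is installed and
# ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    # API callers set their own system prompt; Anthropic's published
    # prompts only apply to Claude.ai and the mobile apps.
    system=(
        "The assistant is Claude, created by Anthropic. "
        "The current date is 2024-08-30. "
        "Always provide code snippets in Markdown."
    ),
    messages=[{"role": "user", "content": "What does a system prompt change?"}],
)
print(response.content[0].text)
```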

In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story on how, In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack is creating a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine, but the Ukrainians are also weaponizing cheap technologies and, in the process, removing many of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks assembled from plans circulating on the internet?

ChatGPT is Bullshit

The Hallucination Lie

Ignacio de Gregorio has a nice Medium essay about why ChatGPT is bullshit. The essay is essentially a short and accessible version of an academic article by Hicks, M. T., et al. (2024), ChatGPT is bullshit. They make the case that people make decisions based on their understanding of what LLMs are doing, and that “hallucination” is the wrong word because ChatGPT is not misperceiving the way a human would. Instead, we need to understand that LLMs are designed with no regard for the truth and are therefore bullshitting.

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. (p. 1)

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor. (p. 2)

At the end, the authors make the case that if we adopt Dennett’s intentional stance, then we would do well to attribute to ChatGPT the intentions of a hard bullshitter, as that would allow us to better diagnose what it is doing. There is also a discussion of the intentions of the developers. You could say that they made available a tool that bullshits without care for the truth.

Are we, as a society, at risk of being led by these LLMs and their constant use to mistake simulacra of “truthiness” for true knowledge?


Why the pope has the ears of G7 leaders on the ethics of AI

Pope Francis is leaning on the thinking of Paolo Benanti, a friar adept at explaining how technology can change the world

The Guardian has some good analysis on Why the pope has the ears of G7 leaders on the ethics of AI. The PM of Italy, Meloni, invited the pope to address the G7 leaders on the issue of AI. I blogged about this here. It is worth pointing out that this is not the first time the Vatican has intervened on the issue of AI ethics. Here is a short timeline:

  • In 2020 a group of Catholic organizations and industry heavyweights sign the Rome Call (Call for AI Ethics). The Archbishop of Canterbury has just signed as well.
  • In 2021 they create the RenAIssance Foundation building on the Rome Call. Its scientific director is Paolo Benanti, a charismatic Franciscan friar, professor, and writer on religion and technology. He is apparently advising both Meloni and the pope, and he coined the term “algo-ethics”. Most of his publications are in Italian, but there is an interview in English. He is also apparently on the OECD’s expert panel now.
  • In 2022 Benanti publishes Human in the Loop: Decisioni umane e intelligenze artificiali (Human in the Loop: Human Decisions and Artificial Intelligences), which is about the importance of ethics to AI and of the human in ethical decisions.
  • In 2024 Meloni invites the Pope to address the G7 leaders gathered in Italy on AI.

Rebind | Read Like Never Before With Experts, AI, & Original Content

Experience the next chapter in reading with Rebind: the first AI-reading platform. Embark on expert-guided journeys through timeless classics.

From a NYTimes story I learned about John Kaag’s new initiative Rebind | Read Like Never Before With Experts, AI, & Original Content. The philosophers Kaag and Clancy Martin have teamed up with an investor to start a company that creates AI-enhanced “rebindings” of classics. They work with an out-of-copyright book and pay someone to interpret or comment on it. The commentary is then used to train an AI with whom you can dialogue as you go through the book. The end result (which I am on the waitlist to try) will be a reading experience enhanced by interpretative videos and chances to interact. It answers Plato’s old critique of writing, that you cannot ask questions of a text. Now you can.
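Rebind hasn’t published how its system works, but one common way to ground a book chatbot in expert commentary is retrieval: find the commentary passage most relevant to the reader’s question and pass it to a language model as context. Here is a minimal sketch of just the retrieval step, with hypothetical annotations on Moby-Dick standing in for an expert’s commentary.

```python
# A hedged sketch of one way a "rebound" book chat might work (not
# Rebind's actual architecture): retrieve the commentary passage most
# relevant to a reader's question via TF-IDF similarity, then hand it
# to an LLM as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

commentary = [  # hypothetical expert annotations on a classic text
    "Ishmael's opening line signals his unreliability as a narrator.",
    "The whale functions as a symbol of the unknowable.",
    "Ahab's monomania mirrors the dangers of obsessive certainty.",
]

question = "What does the whale symbolize?"

# Vectorize the commentary and the question together, then score the
# question against each commentary passage.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(commentary + [question])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]

best = commentary[scores.argmax()]
print("Context passage to pass to the LLM:", best)
```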

This reminds me of an experiment by Schwitzgebel, Strasser, and Crosby, who created a Daniel Dennett chatbot. Here you can see Schwitzgebel’s reflections on the project.

This project raised ethical issues like whether it is ethical to simulate a living person. In this case they asked for Dennett’s permission and didn’t give people direct access to the chatbot. With the announcements about Apple Intelligence, it looks like Apple may provide an AI, built into the system, that will have access to your combined files so as to help with search and to help you talk with yourself. Internal dialogue, of course, is the paradigmatic manifestation of consciousness. Could one import one or two thinkers to have a multi-party dialogue about one’s thinking over time … “What do you think Plato; should I write another paper about ethics and technology?”

Pope to G7: AI is ‘neither objective nor neutral’

Vatican News has a full report on the Pope’s address to the G7 leaders on Artificial Intelligence. In the address the Pope called AI “a true cognitive-industrial revolution” that could lead to “complex epochal transformations”. The full address is available (in various translations) here.

After all, we cannot doubt that the advent of artificial intelligence represents a true cognitive-industrial revolution, which will contribute to the creation of a new social system characterised by complex epochal transformations. For example, artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research and the possibility of giving demanding and arduous work to machines. Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes, raising the dangerous possibility that a “throwaway culture” be preferred to a “culture of encounter”.

Partecipazione del Santo Padre Francesco al G7 a Borgo Egnazia, 14.06.2024 (Participation of the Holy Father Francis in the G7 at Borgo Egnazia, 14.06.2024)

The Pope makes a number of interesting points, starting with one about how tool-making is a looking outward to the environment – a techno-human condition that is part of being human. It is a particular form of openness to the environment that can lead to good or be corrupted, which is why ethics are important. “To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility. This means speaking about ethics.”

The Pope also makes a point that I think Lyotard made in The Postmodern Condition, namely that datafication is limiting our ideas about what knowledge could be. AI could go further and limit our ideas about what it is to think at all. As the Pope says, “We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models.”

The Pope concludes by reminding us that we cannot avoid politics and that what we need is a healthy politics capable of creating “the conditions for such good use [of AI] to be possible and fruitful.”

Media Monitoring of the Past · impresso

Leveraging an unprecedented corpus of newspaper and radio archives, **Impresso – Media Monitoring of the Past** is an interdisciplinary research project that uses machine learning to pursue a paradigm shift in the processing, semantic enrichment, representation, exploration, and study of historical media across modalities, time, languages, and national borders.

I just learned about the Swiss project Impresso: Media Monitoring of the Past. This project has an impressive Web application that lets you search across 76 newspapers in two languages from two countries.

Key to the larger project is using machine learning to work across multiple modalities and dimensions like:

  • News text and radio broadcasts
  • Text and images
  • French and German
  • Different countries

A Data Lab that uses IPython is coming soon. They also have documentation about a Topic Modelling tool, but I couldn’t find the actual tool.
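For readers unfamiliar with topic modelling, here is a toy sketch of the general technique (not Impresso’s actual tool) using scikit-learn’s LDA on a few invented newspaper-like snippets.

```python
# A toy sketch of the kind of topic modelling such a tool might do
# (not Impresso's implementation): scikit-learn's LDA over a tiny
# corpus of newspaper-like snippets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "The parliament debated the new railway budget yesterday.",
    "Radio listeners heard the evening broadcast on the election.",
    "The election results were printed in the morning paper.",
    "A new railway line will connect the two border towns.",
]

# Turn the articles into word counts, dropping English stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles)

# Fit a two-topic model and print the top words for each topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

On a corpus the size of Impresso’s, the same idea scales up to hundreds of topics tracked across decades of newspapers.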

Anyway, this strikes me as an example of an advanced multi-modal news research environment.


Canadian AI 2024 Conference

I’m at the Canadian AI 2024 Conference, where I will be on a panel about “The Future of Responsible AI and AI for Social Good in Canada” on Thursday. This panel is timely given that we seem to be seeing a sea change in AI regulation. If initially there was a lot of talk about the dangers (to innovation) of regulation, we now have large players like China, the US, and the EU introducing regulations.

  • President Biden has issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It is unlikely that Biden can get legislation through Congress, so he is issuing executive orders.
  • The EU recently passed their AI Act which is risk-based.
  • The Canadian Artificial Intelligence and Data Act (AIDA) is coming and is similarly risk-based.

In light of AIDA I would imagine that the short-term future for Responsible AI in Canada might include the following:

  • Debate about AIDA and amendments to align it with other jurisdictions and to respond to industry concerns. Will there be a more inclusive consultation?
  • Attempts to better define what counts as a high-impact AI so as to better anticipate what will need onerous documentation and assessment.
  • Elaboration of how to best run an impact assessment.
  • Discussions around responsibility and how to assign it in different situations.

I hope there will also be a critical exploration of the assumptions and limits of responsible AI.

OpenAI Board Forms Safety and Security Committee

OpenAI has announced the formation of a Safety and Security Committee. See OpenAI Board Forms Safety and Security Committee. This comes after Ilya Sutskever and Jan Leike, who were the co-leads of the Superalignment project, left.

What is notable is that this committee is an offshoot of the board and has 4 board members on it, including Sam Altman. It sounds to me like the board will keep it on a short leash. I doubt it will have the independence to stand up to the board.

I also note that OpenAI’s ethics is no longer about “superalignment” but is now about safety and security. With Sutskever and Leike gone, they are no longer trying to build an AI to align superintelligences. OpenAI claims that their mission is “to ensure that artificial general intelligence benefits all of humanity,” but how will they ensure such beneficence? They no longer have anything like an open ethics strategy. It’s now just safety and security.

I should add that OpenAI shared a safety update as part of the AI Seoul Summit.