Pope to G7: AI is ‘neither objective nor neutral’

Vatican News has a full report on the Pope’s address to the G7 leaders on Artificial Intelligence. In the address the Pope called AI “a true cognitive-industrial revolution” that could lead to “complex epochal transformations”. The full address is available (in various translations) here.

After all, we cannot doubt that the advent of artificial intelligence represents a true cognitive-industrial revolution, which will contribute to the creation of a new social system characterised by complex epochal transformations. For example, artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research and the possibility of giving demanding and arduous work to machines. Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes, raising the dangerous possibility that a “throwaway culture” be preferred to a “culture of encounter”.

Participation of the Holy Father Francis in the G7 at Borgo Egnazia, 14.06.2024

The Pope makes a number of interesting points, starting with one about how tool-making is a form of looking outward to the environment – a techno-human condition that is part of being human. It is a particular form of openness to the environment that can lead to good or be corrupted, which is why ethics matter. “To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility. This means speaking about ethics.”

The Pope also makes a point that I think Lyotard made in The Postmodern Condition, namely that datafication is limiting our ideas about what knowledge could be. AI could go further and limit our ideas about what it is to think at all. As the Pope says, “We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models.”

The Pope concludes by reminding us that we cannot avoid politics and that what we need is a healthy politics capable of creating “the conditions for such good use [of AI] to be possible and fruitful.”

Media Monitoring of the Past · impresso

Leveraging an unprecedented corpus of newspaper and radio archives, **Impresso – Media Monitoring of the Past** is an interdisciplinary research project that uses machine learning to pursue a paradigm shift in the processing, semantic enrichment, representation, exploration, and study of historical media across modalities, time, languages, and national borders.

I just learned about the Swiss project Impresso: Media Monitoring of the Past. This project has an impressive Web application that lets you search across 76 newspapers in two languages from two countries.

Key to the larger project is using machine learning to handle multiple modalities like:

  • News text and radio broadcasts
  • Text and images
  • French and German
  • Different countries

A Data Lab that uses IPython is coming soon. They also have documentation about a Topic Modelling tool, but I couldn’t find the actual tool.

Anyway, this strikes me as an example of an advanced multi-modal news research environment.


Canadian AI 2024 Conference

I’m at the Canadian AI 2024 Conference, where I will be on a panel about “The Future of Responsible AI and AI for Social Good in Canada” on Thursday. This panel is timely given that we seem to be seeing a sea change in AI regulation. Where initially there was a lot of talk about the dangers (to innovation) of regulation, we now have large players like China, the US, and the EU introducing regulations.

  • President Biden has issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It is unlikely that Biden can get legislation through Congress, so he is issuing executive orders.
  • The EU recently passed its AI Act, which is risk-based.
  • Canada’s Artificial Intelligence and Data Act (AIDA) is coming and is similarly risk-based.

In light of AIDA I would imagine that the short-term future for Responsible AI in Canada might include the following:

  • Debate about AIDA and amendments to align it with other jurisdictions and to respond to industry concerns. Will there be a more inclusive consultation?
  • Attempts to better define which AI systems are high-impact, so as to anticipate what will need onerous documentation and assessment.
  • Elaboration of how to best run an impact assessment.
  • Discussions around responsibility and how to assign it in different situations.

I hope there will also be a critical exploration of the assumptions and limits of responsible AI.

OpenAI Board Forms Safety and Security Committee

OpenAI has announced the formation of a Safety and Security Committee. See OpenAI Board Forms Safety and Security Committee. This comes after Ilya Sutskever and Jan Leike, the co-leads of the Superalignment team, left.

What is notable is that this Committee is an offshoot of the board and has four board members on it, including Sam Altman. It sounds to me like the board will keep it on a short leash. I doubt it will have the independence to stand up to the board.

I also note that OpenAI’s ethics is no longer about “superalignment” but about safety and security. With Sutskever and Leike gone, they are no longer trying to build an AI to align superintelligences. OpenAI claims that its mission is “to ensure that artificial general intelligence benefits all of humanity”, but how will they ensure such beneficence? They no longer have anything like an open ethics strategy. It’s now just safety and security.

I should add that OpenAI shared a safety update as part of the AI Seoul Summit.

Securing Canada’s AI advantage | Prime Minister of Canada

AI is already unlocking massive growth in industries across the economy. Many Canadians are already feeling the benefits of using AI to work smarter and faster.

The Prime Minister’s office has just announced a large investment in AI. See Securing Canada’s AI advantage | Prime Minister of Canada. This is a pre-budget announcement of $2.4 billion going to AI-related initiatives, including:

  • $2 billion “to build and provide access to computing capabilities and technological infrastructure for Canada’s world-leading AI researchers, start-ups, and scale-ups”
  • Setting up a “Canadian AI Safety Institute” with $50 million “to further the safe development and deployment of AI”. This sounds like a security rather than ethics institute as it will “help Canada better understand and protect against the risks of advanced or nefarious AI systems, including to specific communities.”
  • Funding for the “enforcement of the Artificial Intelligence and Data Act, with $5.1 million for the Office of the AI and Data Commissioner.”

There are also funds for startups, workers, and businesses.

The massive funding for infrastructure follows a weekend opinion piece in the Globe and Mail (March 21, 2024) titled Canada’s AI infrastructure does not compute. The article suggests we have a lot of talent, but don’t have the metal. Well … now we are getting some metal.

The Deepfake Porn of Kids and Celebrities That Gets Millions of Views

It astonishes me that society apparently believes that women and girls should accept becoming the subject of demeaning imagery.

The New York Times has an opinion piece by Nicholas Kristof on deepfake porn, The Deepfake Porn of Kids and Celebrities That Gets Millions of Views. The piece says what is becoming obvious: deepfake tools are being used overwhelmingly to create porn of women, whether celebrities or girls people know. This artificial intelligence technology is not neutral; it is harmful to a specific group – girls and women.

The article points to some research, like the study 2023 State of Deepfakes by Home Security Heroes. Some of the key findings:

  • The number of deepfake videos is exploding (up 550% from 2019 to 2023)
  • 98% of deepfake videos are porn
  • 99% of that porn features women subjects
  • South Korean singers and actresses make up 53% of those targeted

It only takes about half an hour and almost no money to create a 60-second porn video from a single picture of someone. The ease of use and low cost are making these tools and services mainstream, so that any yahoo can do it to his neighbour or schoolmate. It shouldn’t be surprising that we are seeing stories about young women being harassed by schoolmates who create and post deepfake porn. See stories here and here.

One might think this would be easy to stop – that the authorities could easily find and prosecute the creators of tools like ClothOff, which lets you “undress” a girl whose photo you have taken. Alas, no. The companies hide behind false fronts. The Guardian has a podcast about trying to track down who owns or runs ClothOff.

What we don’t talk about is the responsibility of research projects like LAION, which has created open datasets for training text-to-image models that include pornographic images. They know their datasets include porn but speculate that this will help researchers.

You can learn more about deepfakes from AI Heelp!!!

The Power of AI Is In Our Hands. What Do We Need to Know?

New Trail has a great feature story by Lisa Szabo on generative AI, The Power of AI Is In Our Hands. What Do We Need to Know? The story features a number of us at the U of Alberta talking about generative AI tools like ChatGPT. It quotes me talking about art and how I believe we will still want art by humans despite what AIs can generate. Perhaps it would be more accurate to say that we will enjoy and consume both AI-generated entertainment and art that we believe was created by people we know.

The Lives of Literary Characters

The goal of this project is to generate knowledge about the behaviour of literary characters at large scale and make this data openly available to the public. Characters are the scaffolding of great storytelling. This Zooniverse project will allow us to crowdsource data to train AI models to better understand who characters are and what they do within diverse narrative worlds to answer one very big question: why do human beings tell stories?

Today we are going live on Zooniverse with our Citizen Science (crowdsourcing) project, The Lives of Literary Characters. The goal of the project is to offer micro-tasks that let volunteers annotate literary passages, building up training data. It will be interesting to see if we get a decent number of volunteers.

Before setting this up we did some serious reading around the ethics of crowdsourcing as we didn’t want to just exploit readers.


OpenAI’s GPT store is already being flooded with AI girlfriend bots

OpenAI’s store rules are already being broken, illustrating that regulating GPTs could be hard to control

From Slashdot I learned about a story on how OpenAI’s GPT store is already being flooded with AI girlfriend bots. It isn’t particularly surprising that you can get different girlfriend bots. Nor is it surprising that these would be something you can build in ChatGPT-4. ChatGPT is, after all, a chatbot. What will be interesting to see is whether these chatbot girlfriends are successful. I would have imagined that men would want pornographic girlfriends and that the market for friends would be more for boyfriends along the lines of what Replika offers.