Securing Canada’s AI advantage | Prime Minister of Canada

AI is already unlocking massive growth in industries across the economy. Many Canadians are already feeling the benefits of using AI to work smarter and faster.

The Prime Minister’s Office has just announced a large investment in AI. See Securing Canada’s AI advantage | Prime Minister of Canada. This is a pre-budget announcement of $2.4 billion for AI-related initiatives, including:

  • $2 billion “to build and provide access to computing capabilities and technological infrastructure for Canada’s world-leading AI researchers, start-ups, and scale-ups”
  • Setting up a “Canadian AI Safety Institute” with $50 million “to further the safe development and deployment of AI”. This sounds like a security institute rather than an ethics institute, as it will “help Canada better understand and protect against the risks of advanced or nefarious AI systems, including to specific communities.”
  • Funding for the “enforcement of the Artificial Intelligence and Data Act, with $5.1 million for the Office of the AI and Data Commissioner.”

There are also funds for startups, workers, and businesses.

The massive funding for infrastructure follows a weekend opinion piece in the Globe and Mail (March 21, 2024), Canada’s AI infrastructure does not compute. The article suggests we have a lot of talent but don’t have the metal. Well … now we are getting some metal.

The Deepfake Porn of Kids and Celebrities That Gets Millions of Views

It astonishes me that society apparently believes that women and girls should accept becoming the subject of demeaning imagery.

The New York Times has an opinion piece by Nicholas Kristof on deepfake porn, The Deepfake Porn of Kids and Celebrities That Gets Millions of Views. The piece says what is becoming obvious: deepfake tools are overwhelmingly being used to create porn of women, whether celebrities or girls people know. This artificial intelligence technology is not neutral; it is harmful to a specific group, girls and women.

The article points to research such as the 2023 State of Deepfakes study by Home Security Heroes. Some of the key findings:

  • The number of deepfake videos is exploding (up 550% from 2019 to 2023)
  • 98% of deepfake videos are porn
  • 99% of that porn features women as subjects
  • Women singers and actresses from South Korea make up 53% of those targeted

It only takes about half an hour and almost no money to create a 60-second porn video from a single picture of someone. The ease of use and low cost are making these tools and services mainstream, so that any yahoo can do it to his neighbour or schoolmate. It shouldn’t be surprising that we are seeing stories about young women being harassed by schoolmates who create and post deepfake porn. See stories here and here.

One might think this would be easy to stop – that the authorities could easily find and prosecute the creators of tools like ClothOff that let you undress a girl whose photo you have taken. Alas, no. The companies hide behind false fronts. The Guardian has a podcast about trying to track down who owned or ran ClothOff.

What we don’t talk about is the responsibility of research projects like LAION, which has created open datasets for training text-to-image models that include pornographic images. They know their datasets include porn but speculate that this will help researchers.

You can learn more about deepfakes from AI Heelp!!!

The Power of AI Is In Our Hands. What Do We Need to Know?

The New Trail has a great feature story by Lisa Szabo on generative AI, The Power of AI Is In Our Hands. What Do We Need to Know? The story features a number of us at the U of Alberta talking about generative AI tools like ChatGPT. It quotes me talking about art and how I believe we will still want art by humans despite what AIs can generate. Perhaps it would be more accurate to say that we will enjoy and consume both AI-generated entertainment and art that we believe was made by people we know.

CIFAR welcomes five new Canada CIFAR AI Chairs – CIFAR

Today CIFAR announced five new Canada CIFAR AI Chairs who will join the more than 120 Chairs already appointed at Canada’s three National AI Institutes (Amii in Edmonton, Mila in Montréal, and the Vector Institute in Toronto).

Today they announced that I have been appointed a Canada CIFAR AI Chair, CIFAR welcomes five new Canada CIFAR AI Chairs – CIFAR. Here is the U of A Folio story.

Hurrah!

The Lives of Literary Characters

The goal of this project is to generate knowledge about the behaviour of literary characters at large scale and make this data openly available to the public. Characters are the scaffolding of great storytelling. This Zooniverse project will allow us to crowdsource data to train AI models to better understand who characters are and what they do within diverse narrative worlds to answer one very big question: why do human beings tell stories?

Today we are going live on Zooniverse with our Citizen Science (crowdsourcing) project, The Lives of Literary Characters. The goal of the project is to offer micro-tasks that let volunteers annotate literary passages, producing training data for our models. It will be interesting to see if we get a decent number of volunteers.
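
To make the idea of a micro-task concrete, here is a minimal sketch of what one crowdsourced annotation might look like once it is turned into a training record. The field names and the label set are my own illustrative assumptions, not the project’s actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CharacterAnnotation:
    """One volunteer's response to a single micro-task (illustrative only)."""
    passage_id: str      # identifier of the passage shown to the volunteer (hypothetical)
    passage_text: str    # the excerpt the volunteer read
    character: str       # the character the micro-task asks about
    action_label: str    # e.g. "speaks", "moves", "perceives" (hypothetical label set)
    volunteer_id: str    # anonymized volunteer identifier

# A single record of the kind that could later be aggregated into training data
example = CharacterAnnotation(
    passage_id="novel-042-p17",
    passage_text="Elizabeth walked to the window and watched the rain.",
    character="Elizabeth",
    action_label="moves",
    volunteer_id="vol-001",
)

print(json.dumps(asdict(example), indent=2))
```

Aggregating many such records across volunteers and passages is what would give a model examples of who characters are and what they do.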

Before setting this up we did some serious reading around the ethics of crowdsourcing as we didn’t want to just exploit readers.


OpenAI’s GPT store is already being flooded with AI girlfriend bots

OpenAI’s store rules are already being broken, illustrating that regulating GPTs could be hard to control

From Slashdot I learned about a story on how OpenAI’s GPT store is already being flooded with AI girlfriend bots. It isn’t particularly surprising that you can get different girlfriend bots. Nor is it surprising that these would be something you can build with ChatGPT. ChatGPT is, after all, a chatbot. What will be interesting to see is whether these chatbot girlfriends are successful. I would have imagined that men would want pornographic girlfriends and that the market for friends would be more for boyfriends along the lines of what Replika offers.

Column: AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it

AI investors say their work is so important that they should be able to trample copyright law on their pathway to riches. Here’s why you shouldn’t believe them.

Michael Hiltzik has a nice column about how AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it. He quotes the venture capital firm Andreessen Horowitz, which is investing heavily in AI, as saying,

The only way AI can fulfill its tremendous potential is if the individuals and businesses currently working to develop these technologies are free to do so lawfully and nimbly.

This is like saying that the businesses of the mafia could fulfill their potential if they were allowed to do so lawfully and nimbly. It also assumes that there is tremendous potential and that AI has no pernicious side effects. Do we really know there is positive potential and that it is tremendous?

Hiltzik is quite good on the issue of training on copyrighted material, something playing out in the courts as we speak. I suspect that if the courts allow the free use of large content platforms for model training, we will then find these collections of content sequestered behind license walls that prevent scraping.

How AI Image Generators Make Bias Worse – YouTube

A team at the LIS (London Interdisciplinary School) has created a great short video on the biases of AI image generators. The video covers the issues quickly and is documented with references you can follow for more. I had been looking at how image generators portray academics like philosophers, but this video reports on research that went much further.

What is also interesting is how this grew out of an LIS undergrad’s first-year project. It says something about LIS that they encourage and build on such projects. This got me wondering about LIS, which I had never heard of before. It seems to be a new teaching college in London, UK, built around interdisciplinary programmes, not departments, that deal with “real-world problems.” It sounds a bit like problem-based learning.

Anyway, it will be interesting to watch how it evolves.

CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama

The Eight Sleep pod is a mattress topper with a terms of service and a privacy policy. The company “may share or sell” the sleep data it collects from its users.

From Slashdot, a story about how a CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama. The story is worrisome because of the data being gathered by a smart mattress company and the uses it is being put to. I’m less sure about the inferences the CEO (Matteo Franceschetti) draws from his data and his call to “fix this.” How would Eight Sleep fix this? Sell more product?