Metaculus on AGI Outcomes

Listening to Jacob Steinhardt on The Hinton Lectures™, I learned about Metaculus, a forecasting service run as a public benefit company. It has a focus area on AI Progress with lots of AI-related forecasts (which seems to be a huge area of interest). The service coordinates human forecasts and builds infrastructure to help others forecast.

Neat!

UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference

Yesterday I organized a satellite panel for the UNESCO – Artificial Intelligence for Information Accessibility (AI4IA) Conference. The full conference takes place on GatherTown, a conferencing system that feels like an 8-bit game from the 1980s. You wander around our AI4IA conference space, talk with others who are close by, and watch short prerecorded video talks, of which there are about 60. I’m proud that Amii and the University of Alberta provided the technical support and funding that made the conference possible. The videos will also be up on YouTube for those who can’t make the conference.

The event we organized at the University of Alberta on Friday was an online panel on “What is Responsible in Responsible Artificial Intelligence?” with Bettina Berendt, Florence Chee, Tugba Yoldas, and Katrina Ingram.

Bettina Berendt looked at what the Canadian approach to responsible AI could be and how it might be short-sighted. She talked about a project that, like a translator, lets a person “translate” their writing in whistleblowing situations into prose that won’t identify them. It helps you remove the personally identifiable signals from the text. She then pointed out how this might be responsible, but might also lead to problems.

Florence Chee talked about how responsibility and ethics should be a starting point rather than an afterthought.

Tugba Yoldas talked about how meaningful human control is important to responsible AI and what it takes for there to be control.

Katrina Ingram of Ethically Aligned AI nicely wrapped up the short talks by discussing how she advises organizations that want to weave ethics into their work. She talked about the 4 Cs: Context, Culture, Content, and Commitment.

 

In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story on how, In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack is creating a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine, but the Ukrainians are also weaponizing cheap technologies and, in the process, removing a lot of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks put together following plans circulating on the internet?

The Deepfake Porn of Kids and Celebrities That Gets Millions of Views

It astonishes me that society apparently believes that women and girls should accept becoming the subject of demeaning imagery.

The New York Times has an opinion piece by Nicholas Kristof on deepfake porn, The Deepfake Porn of Kids and Celebrities That Gets Millions of Views. The piece says what is becoming obvious: deepfake tools are being used overwhelmingly to create porn of women, whether celebrities or girls people know. This artificial intelligence technology is not neutral; it is harmful to a specific group – girls and women.

The article points to research like the 2023 State of Deepfakes study by Home Security Heroes. Some of the key findings:

  • The number of deepfake videos is exploding (up 550% from 2019 to 2023)
  • 98% of deepfake videos are porn
  • 99% of the subjects of that porn are women
  • South Korean singers and actresses make up 53% of those targeted

It only takes about half an hour and almost no money to create a 60-second porn video from a single picture of someone. The ease of use and low cost are making these tools and services mainstream, so that any yahoo can do it to his neighbour or schoolmate. It shouldn’t be surprising that we are seeing stories about young women being harassed by schoolmates who create and post deepfake porn. See stories here and here.

One might think this would be easy to stop – that the authorities could easily find and prosecute the creators of tools like ClothOff, which lets you “undress” a girl whose photo you have taken. Alas, no. The companies hide behind false fronts. The Guardian has a podcast about trying to track down who owned or ran ClothOff.

What we don’t talk about is the responsibility of research projects like LAION, which has created open datasets for training text-to-image models that include pornographic images. They know their datasets include porn but speculate that this will help researchers.

You can learn more about deepfakes from AI Heelp!!!

OpenAI’s GPT store is already being flooded with AI girlfriend bots

OpenAI’s store rules are already being broken, illustrating that GPTs could be hard to regulate

From Slashdot I learned about a story on how OpenAI’s GPT store is already being flooded with AI girlfriend bots. It isn’t particularly surprising that you can get different girlfriend bots. Nor is it surprising that these would be something you can build in ChatGPT-4. ChatGPT is, after all, a chatbot. What will be interesting to see is whether these chatbot girlfriends are successful. I would have imagined that men would want pornographic girlfriends and that the market for friends would be more for boyfriends, along the lines of what Replika offers.

Elon Musk, X and the Problem of Misinformation in an Era Without Trust

Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.

Jennifer Szalai of the New York Times has a good book review essay on misinformation and disinformation, Elon Musk, X and the Problem of Misinformation in an Era Without Trust. She writes about how Big Tech (Facebook and Google) benefits from the view that people are being manipulated by social media: it helps sell their services, even though there is little evidence of clear and easy manipulation. It is possible that there is an academic business of Big Disinfo that is invested in a story about fake news and its solutions. The deeper problem may instead be the authority of elites who regularly lie to the US public. Think of the lies told after 9/11 to justify the “war on terror”; why should we believe any “elite”?

One answer is to call on people to “do your own research.” Of course, that call has its own agenda; it tends to be a call for unsophisticated research through the internet. Everyone should do their own research, but in most cases we can’t. What would it take to really understand vaccines through your own research, as opposed to joining some epistemic community and calling the parroting of its truisms research? With the internet there is an abundance of communities of research to join that will make you feel well-researched. Who needs a PhD? Who needs to actually do original research? Conspiracy communities, like academic communities, provide a safe haven for networks of ideas.

On Making in the Digital Humanities

On Making in the Digital Humanities fills a gap in our understanding of digital humanities projects and craft by exploring the processes of making as much as the products that arise from it. The volume draws focus to the interwoven layers of human and technological textures that constitute digital humanities scholarship.

On Making in the Digital Humanities is finally out from UCL Press. The book honours the work of John Bradley and those in the digital humanities who share their scholarship through projects. Stéfan Sinclair and I first started work on it years ago and were soon joined by Juliane Nyhan and later Alexandra Ortolja-Baird. It is a pleasure to see it finished.

I co-wrote the Introduction with Nyhan and wrote a final chapter on “If Voyant then Spyral: Remembering Stéfan Sinclair: A discourse on practice in the digital humanities.” Stéfan passed away during the editing of the volume.

All the (open) world’s a stage: how the video game Fallout became a backdrop for live Shakespeare shows

Free to roam through the post-apocalyptic game, one intrepid group has taken to performing the Bard. They have found an intent new audience, as well as the odd mutant scorpion

The Guardian has a nice story today about a Shakespeare troupe who are staging plays in Fallout 76: All the (open) world’s a stage: how the video game Fallout became a backdrop for live Shakespeare shows.

It seems to me that this is related to the various activities that were staged in Second Life and other environments. It also has connections to the Machinima phenomenon where people use 3D environments like games to stage acts that are filmed.

Of course, the problem with Fallout 76 is that the performers can get attacked during a performance.

How AI image generators work, like DALL-E, Lensa and stable diffusion

Use our simulator to learn how AI generates images from “noise.”

The Washington Post has a nice explainer on how text-to-image generators work: How AI image generators work, like DALL-E, Lensa and stable diffusion. They let you play with the generator, though you have to stick to the predefined phrases. What I hadn’t realized was the role of static noise in the diffusion model. As I understand it, the model is trained to recognize images under increasing amounts of noise, and it then generates images by starting from pure noise and progressively denoising.
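To make sense of this for myself, here is a toy sketch in plain NumPy (nothing like the actual DALL-E or Stable Diffusion code, and the noise schedule is made up for illustration) of the “forward” half of diffusion: an image is gradually drowned in Gaussian static over a series of steps. A generator is trained to run this process in reverse, predicting and removing the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, step, total_steps):
    """Blend an image with Gaussian noise. At step 0 the image is
    untouched; by the final step almost no signal remains."""
    alpha = 1.0 - step / total_steps       # fraction of signal kept
    noise = rng.normal(0.0, 1.0, image.shape)
    return np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise

image = rng.uniform(0.0, 1.0, (8, 8))      # stand-in for a real image
total = 10
snapshots = [add_noise(image, t, total) for t in range(total + 1)]
# snapshots[0] is the original; snapshots[-1] is pure static.
# A diffusion model learns to step backwards through this sequence,
# which is how it can start from random noise and "find" an image.
```

The interesting part, which this sketch leaves out, is the learned reverse step: a neural network is trained to predict the noise that was added, so subtracting its prediction moves a noisy image one step back towards something recognizable.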

Workplace Productivity: Are You Being Tracked?

“We’re in this era of measurement but we don’t know what we should be measuring,” said Ryan Fuller, former vice president for workplace intelligence at Microsoft.

The New York Times has an essay on Workplace Productivity: Are You Being Tracked? The neat thing is that the article tracks your reading of it to give you a taste of the sorts of tracking now being deployed for remote (and on-site) workers. If you pause and don’t scroll, it puts up messages like “Hey are you still there? You’ve been inactive for 32 seconds.”

But Ms. Kraemer, like many of her colleagues, found that WorkSmart upended ideas she had taken for granted: that she would have more freedom in her home than at an office; that her M.B.A. and experience had earned her more say over her time.

What is new is the shift to remote work due to Covid. Many companies are fine with remote work if they can guarantee productivity. The other thing that is changing is the use of tracking not just for manual work, but also for white-collar work.

I’ve noticed that this goes hand in hand with self-tracking. My Apple Watch and iPhone offer a weekly summary of my browsing and also offer to track my physical activity. If I go for a walk of somewhere close to a kilometer, the watch asks if I want it tracked as exercise.

The questions raised by the authors of the New York Times article include: Are we tracking the right things? What are we losing with all this tracking? What is happening to all this data? Can companies sell the data about employees?

The article is by Jodi Kantor and Arya Sundaram and was produced by Aliza Aufrichtig and Rumsey Taylor (Aug. 14, 2022).