A New Way to Inoculate People Against Misinformation

A new set of online games holds promise for helping identify and prevent harmful misinformation from going viral.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as exposure to a weakened pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds their resistance against future manipulation.

Prebunking is being touted as A New Way to Inoculate People Against Misinformation. The idea is that one can inoculate people against the manipulation of misinformation. This strikes me as similar to how we were taught to “read” advertising in order to inoculate us against corporate manipulation. Did it work?

The Cambridge Social Decision-Making Lab has developed some games like the Bad News Game to build psychological resistance to misinformation.

That viruses and inoculation can be metaphors for patterns of psychological influence is worrisome. It suggests a lack of agency or reflection among people. How are memes not like viruses?

The Lab has been collaborating with Google’s Jigsaw on Inoculation Science, which has developed the games and videos to explain misinformation.

Replaying Japan 2021

Yesterday (Friday the 13th of August) we finished the 5th day of the Replaying Japan 2021 conference. The conference was organized by the AI for Society Signature Area, the Kule Institute for Advanced Study, and the Prince Takamado Japan Centre all at the University of Alberta.

At the conference I organized a roundtable about the Replaying Japan conference itself titled “Ten Years of Dialogue: Reflecting on Replaying Japan.” I moderated the discussion and started with a brief history that I quote from here:

The Replaying Japan conference will have been going now for ten years if you include its predecessor symposium that was held in 2012 in Edmonton, Canada.

The encounter around Japanese Game Culture came out of the willingness of Ritsumeikan University to host Geoffrey Rockwell as a Japan Foundation Japan Studies Fellow in Kyoto in 2011. While Rockwell worked closely with researchers like Prof. INABA at the Ritsumeikan Digital Humanities Centre for Japanese Arts and Culture, he also got to meet Professors Nakamura and Koichi at the Ritsumeikan Centre for Game Studies. Out of these conversations it became clear that game studies in the West and game studies in Japan were not in conversation. The research communities were silos working in their own languages that didn’t intermingle much. We agreed that we needed to try to bridge the communities and organized a first small symposium in 2012 in Edmonton with support from the Prince Takamado Japan Centre at the University of Alberta. At a meeting right after the symposium we developed the idea for a conference that could go back and forth between Japan and the West called Replaying Japan. Initially the conference just went back and forth between Kyoto and Edmonton, but we soon started going to Europe and the USA which expanded the network.

(From the abstract for the roundtable)

At the conference I was also part of two papers that were presented by others:

  1. Keiji Amano presented on “The Rise and Fall of Popular Amusement: Operation Invader Shoot Down.” This paper looked at Nagoya tabloids and how they described the explosion of Space Invaders as a threat to the pachinko industry.
  2. Mimi Okabe presented on “Moral Management in Japanese Game Companies” which discussed how certain Japanese game companies manage their ethical reputation. We looked at specific issues like forced labour in the supply chain, gender issues, and work-life balance.

You can see the conference Schedule here.

Diggin’ in the Carts: Japanese video game music history

Meet the men and women responsible for creating the most iconic tunes in video game history.

We finished up the Replaying Japan 2021 conference today. The conference was online using Zoom and Gather Town, where there was a hidden easter egg with a link to Diggin’ in the Carts: Japanese video game music history, a 5-part documentary from Red Bull that is quite good. The five 15-minute episodes make up the first season. Not sure if there will be other seasons, but there is a related radio show with multiple seasons. The documentary episodes nicely feature the composers and experts talking about this Japanese history, along with other musicians commenting on the influence of the early music, which would have been heard over and over in households with Japanese consoles.

The creator of the show is Nick Dwyer, who is interviewed here about the documentary and associated radio show.

Trump Tweet Archive

All 50,000+ of Trump’s tweets, instantly searchable

Thanks to Kaylin I found the Trump Twitter Archive: TTA – Search. It’s a really nice, clean site that lets you search or filter Trump’s tweets from when he was elected to when his account was shut down on January 8th, 2021. You can also download the data if you want to try other tools.
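If you do download the data, a few lines of Python are enough to start exploring it. Here is a minimal sketch, assuming a JSON export where each tweet has text and date fields; the actual field names and date format in the download may differ.

```python
# Minimal sketch: filter a downloaded Trump Twitter Archive export.
# Assumes a JSON file where each tweet has "text" and "date" fields;
# the real export's field names and date format may differ.
import json
from datetime import datetime

with open("tweets.json", encoding="utf-8") as f:
    tweets = json.load(f)

# Keep 2021 tweets that mention "vaccine" (case-insensitive).
hits = [
    t for t in tweets
    if "vaccine" in t["text"].lower()
    and datetime.strptime(t["date"], "%Y-%m-%d %H:%M:%S").year == 2021
]

for t in sorted(hits, key=lambda t: t["date"]):
    print(t["date"], t["text"][:80])
```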

I find reading his tweets now to be quite entertaining. Here are two back-to-back tweets that seem to almost contradict each other. First he boasts about the delivery of vaccines, and then talks about Covid as Fake News!

Jan 3rd 2021 – 8:14:10 AM EST: The number of cases and deaths of the China Virus is far exaggerated in the United States because of @CDCgov’s ridiculous method of determination compared to other countries, many of whom report, purposely, very inaccurately and low. “When in doubt, call it Covid.” Fake News!

Jan 3rd 2021 – 8:05:34 AM EST: The vaccines are being delivered to the states by the Federal Government far faster than they can be administered!

Apple will scan iPhones for child pornography

Apple unveiled new software Thursday that scans photos and messages on iPhones for child pornography and explicit messages sent to minors in a major new effort to prevent sexual predators from using Apple’s services.

The Washington Post and other news venues are reporting that Apple will scan iPhones for child pornography. As the subtitle to the article puts it “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponize the feature.” Child porn is the go-to case when organizations want to defend surveillance.

The software will scan without our knowledge or consent, which raises privacy issues. What are the chances of false positives? What if the tool is adapted to catch other types of images? Edward Snowden and the EFF have criticized this move. It seems inconsistent with Apple’s firm position on privacy and its refusal to even unlock iPhones for law enforcement.
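For those curious about the mechanics, Apple’s system reportedly matches hashes of photos against a database of hashes of known material. The sketch below is not Apple’s NeuralHash; it is a toy average-hash illustration of how matching against a list of known hashes works, and of why false positives and repurposing (swap in a different list of hashes) are live concerns.

```python
# Toy illustration of hash-based image matching -- NOT Apple's NeuralHash.
# A simple "average hash" shows how matching against a list of known image
# hashes works, and why collisions (false positives) are possible.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, threshold on the mean -> 64-bit hash."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A blocklist of hashes of known images (hypothetical values). The same
# mechanism would flag matches against *any* list of hashes it is given.
BLOCKLIST = {0x1234567890ABCDEF}
THRESHOLD = 5  # hashes this close (in bits) count as a "match"

def flagged(path: str) -> bool:
    h = average_hash(path)
    return any(hamming(h, known) <= THRESHOLD for known in BLOCKLIST)

# Example: flagged("photo.jpg")
```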

It strikes me that there is a great case study here.

Pentagon believes its precognitive AI can predict events ‘days in advance’

The US military is testing AI that helps predict events days in advance, helping it make proactive decisions.

Engadget has a story on how the Pentagon believes its precognitive AI can predict events ‘days in advance’. It is clear that for most, the value of AI and surveillance lies in prediction, and yet there are some fundamental contradictions. As Hume pointed out centuries ago, all prediction is based on extrapolation from past behaviour. We simply don’t know the future; the best we can do is select features of past behaviour that seemed to do a good job of predicting (retrospectively) and hope they will work in the future. Alas, we get seduced by the effectiveness of retrospective work. As Smith and Cordes put it in The Phantom Pattern Problem:

How, in this modern era of big data and powerful computers, can experts be so foolish? Ironically, big data and powerful computers are part of the problem. We have all been bred to be fooled—to be attracted to shiny patterns and glittery correlations. (p. 11)

What if machine learning and big data were really best suited for studying the past and not predicting the future? Would there be the hype? The investment?
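To make the phantom-pattern point concrete, here is a small sketch of my own (not from Smith and Cordes): generate a thousand purely random “features”, keep the one that best “predicted” an equally random outcome in the past, and watch it fall back to chance on new data.

```python
# Sketch of the phantom-pattern problem: with enough random features,
# one will always look predictive in retrospect, yet fail on new data.
import random

random.seed(42)
N_PAST, N_FUTURE, N_FEATURES = 200, 200, 1000

# A purely random binary "event" we pretend we want to predict.
past_target = [random.random() < 0.5 for _ in range(N_PAST)]
future_target = [random.random() < 0.5 for _ in range(N_FUTURE)]

def accuracy(feature, target):
    return sum(f == t for f, t in zip(feature, target)) / len(target)

# Generate many random features and keep the one that "worked" in the past.
features = {}
best_id, best_past_acc = None, 0.0
for i in range(N_FEATURES):
    features[i] = [random.random() < 0.5 for _ in range(N_PAST + N_FUTURE)]
    acc = accuracy(features[i][:N_PAST], past_target)
    if acc > best_past_acc:
        best_id, best_past_acc = i, acc

future_acc = accuracy(features[best_id][N_PAST:], future_target)
print(f"Best feature's past 'accuracy':  {best_past_acc:.2f}")  # well above 0.5
print(f"Same feature's future accuracy: {future_acc:.2f}")      # roughly 0.5
```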

When the next AI winter comes, we in the humanities could pick up the pieces and use these techniques to try to explain the past, but I’m getting ahead of myself and predicting another winter.

One letter at a time: index typewriters and the alphabetic interface — Contextual Alternate

Drawing on a selection of non-keyboard ‘index’ typewriters, this exhibition explores how input mechanisms and alphabetic arrangements were devised and contested continually in the process of popularising typewriters as personal objects. The display particularly looks at how the letters of the alphabet…

Reading Thomas S. Mullaney’s The Chinese Typewriter, I’m struck by the variety of different typewriting solutions. As you can see from this exhibit web site, One letter at a time: index typewriters and the alphabetic interface — Contextual Alternate, there were all sorts of alternatives to the QWERTY keyboard early on, and many of them could accommodate more keys so as to support other languages, including a non-alphabetic script like Chinese. As Mullaney points out, there is a history to the emergence of the typewriter that we now assume is normal.

This history of our collapsing technolinguistic imaginary took place across four phases: an initial period of plurality and fluidity in the West in the late 1800s, in which there existed a diverse assortment of machines through which engineers, inventors, and everyday individuals could imagine the very technology of typewriting, as well as its potential expansion to non-English and non-Latin writing systems; second, a period of collapsing possibility around the turn of the century in which a specific typewriter form—the shift-keyboard typewriter—achieved unparalleled dominance, erasing prior alternatives first from the market and then from the imagination; next, a period of rapid globalization from the 1900s onward in which the technolinguistic monoculture of shift-keyboard typewriting achieved global proportions, becoming the technological benchmark against which was measured the “efficiency” and thus modernity of an ever-increasing number of world scripts; and, finally, the machine’s encounter with the one world script that remained frustratingly outside its otherwise universal embrace: Chinese.

Mullaney, Thomas S. The Chinese Typewriter (Kindle Locations 1183-1191). MIT Press. Kindle Edition.

What Ever Happened to IBM’s Watson? – The New York Times

IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson.

The New York Times has a story about What Ever Happened to IBM’s Watson? The story is a warning to all of us about the danger of extrapolating from intelligent behaviour in one limited domain to others. Watson got good enough at trivia question answering (or posing) to win at Jeopardy!, but that didn’t scale out.

IBM’s strategy is interesting to me. Developing an AI to win at a game like Jeopardy! was what IBM did with Deep Blue, which won at chess in 1997. Winning at a game considered paradigmatically a game of intelligence is a great way to get public relations attention.

Interestingly, what seems to be working with Watson is not the moon-shot, game-playing type of service, but the automation of basic natural language processing tasks.

Having recently read Edwin Black’s IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation, I must say that the choice of the name “Watson” grates. Thomas Watson was responsible for IBM’s ongoing engagement with the Nazis, for which he got a medal from Hitler in 1937. Watson didn’t seem to care how IBM’s data processing technology was being used to manage people, especially Jews. I hope the CEOs of AI companies today are more ethical.

ImageGraph: a visual programming language for the Visual Digital Humanities

Leonardo Impett has a nice demonstration here of ImageGraph: a visual programming language for the Visual Digital Humanities. ImageGraph is a visual programming environment that works with Google Colab. When you have your visual program, you can compile it into Python in a Colab notebook and then run that notebook. The visual program is stored in your GitHub account and the Python code can, of course, be used in larger projects.

The visual programming language has a number of functions for handling images and using artificial intelligence techniques on them. It also has text functions, but they are apparently not fully worked out.
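I haven’t looked at the code ImageGraph actually generates, but the kind of image-and-AI pipeline it supports might compile down to Python along these lines. This is a sketch of my own; the model, file name, and calls are my assumptions, not ImageGraph output.

```python
# Hypothetical sketch of an image pipeline of the kind ImageGraph supports:
# load an image, run a pretrained classifier, report the top label.
# This is illustrative only, not code generated by ImageGraph.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize as the model expects

img = Image.open("painting.jpg").convert("RGB")  # any local image file
batch = preprocess(img).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(weights.meta["categories"][top_idx.item()], f"{top_prob.item():.2%}")
```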

I love the way Impett combines off-the-shelf systems while adding a nice visual development environment. Very clean.

The Best of Voyager, Part 1

The Digital Antiquarian has posted the first part of a multipart essay on The Best of Voyager, Part 1. The Voyager Company was a pioneer in the development and distribution of interactive CD-ROMs in the 1990s. They published a number of classics like Amanda Stories, Beethoven’s Ninth Symphony CD-ROM, and Poetry in Motion. They also published some hybrid laserdisc/software combinations like The National Gallery of Art.

Unlike the multimedia experiments coming out of university labs, these CD-ROMs were designed to be commercial products and did sell. I remember ordering a number for the University of Toronto Computing Services so we could show what multimedia could do. They were some of the first products to show in a compelling way how interactivity could make a difference. Many included interactive audio, like the Beethoven one; others used QuickTime (digital video) for the first time.

All of this was, to some extent, made anachronistic when the web took off and began to incorporate multimedia effectively. Voyager set the scene by remediating earlier works (like the film A Hard Day’s Night). But CD-ROMs were, in their turn, replaced.

My favourite was The Residents Freak Show. This was a strange 3D-like tour of the music of The Residents that was organized around a freak show motif.

Thanks to Peter for this.