Conspiracy Theories

Big Valley Creation Science Museum

A couple of weeks ago we traveled around Southern Alberta visiting out-of-the-way museums, including the Creation Science Museum in Big Valley. To visit we had to book over the phone, and we were given an intense and argumentative tour of the small museum. There is much to be said about the arguments I had with the guide, who was well informed, experienced at arguing, and passionate about his beliefs. One of the things that struck me was how his belief system was a network of interlocking and mutually supporting conspiracy theories, including:

  • A belief that the universe was created by God about 7,000 to 10,000 years ago as set out by Genesis. Much of the museum was dedicated to debunking evolutionary science as speculation in order to set up the “truth.”
  • There is a group of Illuminati who control the world. I think this is connected to the New World Order, but I wasn’t following the argument.
  • Noah’s Ark is on Mount Ararat in Turkey, but the Turkish government is preventing access.
  • The apocalypse is coming soon.
  • Aliens are actually demons visiting earth.

I probably didn’t get the whole network of theories right. There was a final exhibit that traced the Windsors back to Adam and Eve, which was supposed to mean something. What interested me after the encounter was the passion of conspiracy. What is it that is so attractive about these theories? Why do they go together? There is a lot of good writing out there, including 7 Insights From Interviewing Conspiracy Theory Believers and Understanding Conspiracy Theories. To summarize,

  • Believing in conspiracies provides community and empowerment. If you are serious enough about them it can also provide an identity.
  • Conspiracies explain the world more thoroughly and simply. They often present simple answers – i.e., that there is a small hidden group running things. Such answers are more satisfying than “it’s complicated.”
  • Conspiracies present themselves as truth in a postmodern age that makes it difficult to know what to believe. They are also imaginative and often remarkably thorough.
  • The theories have similar patterns of ideas which makes them fit well with each other. If you have “researched” one then you will probably believe others.
  • Believing in a conspiracy makes you an alternative type of expert who is “woke” to the truth about the world. Such expertise is a shortcut around the expertise that comes from getting an education, which can be rather time-consuming.
  • Most conspiracies include explanations about why the theory is not believed by experts. As an added bonus, there is also a story of persecution that lets you take on the identity of noble victim.
  • To know a truth that most others don’t confers power and exceptionalism on you. You understand where others don’t. It also is a sign of rugged freedom of thought as you have not been lulled into following the herd.
  • Most of these theories do not call for immediate action; after all, there is nothing you can do when the world is controlled by the Illuminati. In some cases they relieve one of the need to act: no need to worry about climate change if it is fake news. That said, some conspiracies do motivate some people to terrible acts (think of how incel theory has inspired some), and they do provide theories of agency.

By contrast, the theories I believe in are tentative, dependent on trusting others, without heroic opportunities, incomplete and often contradictory. They do, however, call for action, even solidarity, but with humility.

A New Way to Inoculate People Against Misinformation

A new set of online games holds promise for helping identify and prevent harmful misinformation from going viral.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

Prebunking is being touted as A New Way to Inoculate People Against Misinformation. The idea is that one can inoculate people against the manipulation of misinformation. This strikes me as similar to how we were taught to “read” advertising in order to inoculate us against corporate manipulation. Did it work?

The Cambridge Social Decision-Making Lab has developed some games like the Bad News Game to build psychological resistance to misinformation.

That viruses and inoculation can be metaphors for patterns of psychological influence is worrisome. It suggests a lack of agency or reflection among people. How are memes not like viruses?
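The epidemiological metaphor is easy enough to caricature in code, which itself shows how thin it can be. Here is a minimal sketch of a toy contagion model in which “prebunked” agents are less likely to adopt a spreading meme; the model and all its parameters are invented for illustration and have nothing to do with the Lab’s actual research:

```python
import random

def simulate(n_agents=1000, prebunked_frac=0.3, base_spread=0.4,
             inoculation_factor=0.2, steps=20, seed=42):
    """Toy meme-contagion model: 'prebunked' agents are less likely
    to adopt the meme. All parameters are invented for illustration."""
    rng = random.Random(seed)
    prebunked = [rng.random() < prebunked_frac for _ in range(n_agents)]
    believes = [False] * n_agents
    believes[0] = True  # patient zero

    for _ in range(steps):
        exposure = sum(believes) / n_agents  # chance of meeting a believer
        for i in range(n_agents):
            if believes[i]:
                continue
            p = base_spread * exposure
            if prebunked[i]:
                p *= inoculation_factor  # weakened-dose "immunity"
            if rng.random() < p:
                believes[i] = True
    return sum(believes) / n_agents

print("believers with prebunking:   ", simulate())
print("believers without prebunking:", simulate(prebunked_frac=0.0))
```

Everything interesting about human belief – argument, reflection, trust – is compressed here into a single multiplier, which is precisely what makes the metaphor worrisome.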

The Lab has been collaborating with Google’s Jigsaw on Inoculation Science, which has developed the games and videos to explain misinformation.

Replaying Japan 2021

Yesterday (Friday the 13th of August) we finished the 5th day of the Replaying Japan 2021 conference. The conference was organized by the AI for Society Signature Area, the Kule Institute for Advanced Study, and the Prince Takamado Japan Centre all at the University of Alberta.

At the conference I organized a roundtable about the Replaying Japan conference itself titled “Ten Years of Dialogue: Reflecting on Replaying Japan.” I moderated the discussion and started with a brief history that I quote from here:

The Replaying Japan conference will have been going now for ten years if you include its predecessor symposium that was held in 2012 in Edmonton, Canada.

The encounter around Japanese Game Culture came out of the willingness of Ritsumeikan University to host Geoffrey Rockwell as a Japan Foundation Japan Studies Fellow in Kyoto in 2011. While Rockwell worked closely with researchers like Prof. INABA at the Ritsumeikan Digital Humanities Centre for Japanese Arts and Culture, he also got to meet Professors Nakamura and Koichi at the Ritsumeikan Centre for Game Studies. Out of these conversations it became clear that game studies in the West and game studies in Japan were not in conversation. The research communities were silos, working in their own languages, and didn’t intermingle much. We agreed that we needed to try to bridge the communities and organized a first small symposium in 2012 in Edmonton with support from the Prince Takamado Japan Centre at the University of Alberta. At a meeting right after the symposium we developed the idea for a conference that could go back and forth between Japan and the West called Replaying Japan. Initially the conference just went back and forth between Kyoto and Edmonton, but we soon started going to Europe and the USA, which expanded the network.

(From the abstract for the roundtable)

At the conference I was also part of two papers that were presented by others:

  1. Keiji Amano presented on “The Rise and Fall of Popular Amusement: Operation Invader Shoot Down.” This paper looked at Nagoya tabloids and how they described the explosion of Space Invaders as a threat to the pachinko industry.
  2. Mimi Okabe presented on “Moral Management in Japanese Game Companies,” which discussed how certain Japanese game companies manage their ethical reputation. We looked at specific issues like forced labour in the supply chain, gender issues, and work-life balance.

You can see the conference Schedule here.

Apple will scan iPhones for child pornography

Apple unveiled new software Thursday that scans photos and messages on iPhones for child pornography and explicit messages sent to minors in a major new effort to prevent sexual predators from using Apple’s services.

The Washington Post and other news venues are reporting that Apple will scan iPhones for child pornography. As the subtitle to the article puts it “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponize the feature.” Child porn is the go-to case when organizations want to defend surveillance.

The software will scan without our knowledge or consent, which raises privacy issues. What are the chances of false positives? What if the tool is adapted to catch other types of images? Edward Snowden and the EFF have criticized this move. It seems inconsistent with Apple’s firm position on privacy and its refusal to unlock iPhones even for law enforcement.
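Apple’s announced system (NeuralHash) reportedly matches a learned perceptual hash of each photo on-device against a database of hashes of known images; the details are more elaborate, but the basic pattern can be sketched. The toy example below uses a crude average hash, and the blocklist, threshold, and function names are stand-ins of mine, not Apple’s API:

```python
from PIL import Image  # Pillow

def average_hash(path, size=8):
    """Crude perceptual hash: downscale to 8x8 grayscale and threshold
    each pixel on the mean. A stand-in for Apple's learned NeuralHash."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of hashes of known images (empty placeholder here).
BLOCKLIST = set()
MATCH_THRESHOLD = 5  # bits of tolerance for near-duplicates; arbitrary

def flagged(path):
    h = average_hash(path)
    return any(hamming(h, bad) <= MATCH_THRESHOLD for bad in BLOCKLIST)
```

Note that the tolerance for near-duplicates is what makes false positives possible, and nothing in the mechanism cares what kind of images the blocklist contains, which is exactly why critics worry the feature could be repurposed.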

It strikes me that there is a great case study here.

Pentagon believes its precognitive AI can predict events ‘days in advance’

The US military is testing AI that helps predict events days in advance, helping it make proactive decisions.

Engadget has a story on how the Pentagon believes its precognitive AI can predict events ‘days in advance’. It is clear that for most, the value of AI and surveillance lies in prediction, and yet there are some fundamental contradictions. As Hume pointed out centuries ago, all prediction is based on extrapolation from past behaviour. We simply don’t know the future; the best we can do is select features of past behaviour that seemed to do a good job predicting (retrospectively) and hope they will work in the future. Alas, we get seduced by the effectiveness of retrospective work. As Smith and Cordes put it in The Phantom Pattern Problem:

How, in this modern era of big data and powerful computers, can experts be so foolish? Ironically, big data and powerful computers are part of the problem. We have all been bred to be fooled—to be attracted to shiny patterns and glittery correlations. (p. 11)

What if machine learning and big data were really best suited for studying the past and not predicting the future? Would there be the hype? The investment?
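The seduction is easy to demonstrate: search enough random series for one that “predicted” the past and you will always find one, and it will then do no better than chance on the future. A minimal sketch using pure noise (my own illustration, not Smith and Cordes’ code):

```python
import random

rng = random.Random(0)
N_TRAIN, N_TEST, N_FEATURES = 50, 50, 1000

# A purely random target and a thousand purely random candidate "predictors".
target = [rng.choice([0, 1]) for _ in range(N_TRAIN + N_TEST)]
features = [[rng.choice([0, 1]) for _ in range(N_TRAIN + N_TEST)]
            for _ in range(N_FEATURES)]

def accuracy(feature, lo, hi):
    """Fraction of positions in [lo, hi) where the feature matches the target."""
    return sum(f == t for f, t in zip(feature[lo:hi], target[lo:hi])) / (hi - lo)

# Retrospectively select the feature that best "predicted" the training period.
best = max(features, key=lambda f: accuracy(f, 0, N_TRAIN))

print("in-sample accuracy:    ", accuracy(best, 0, N_TRAIN))
print("out-of-sample accuracy:", accuracy(best, N_TRAIN, N_TRAIN + N_TEST))
```

With a thousand random features and only fifty observations, the best in-sample “predictor” looks impressive; out of sample it collapses back to coin-flipping.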

When the next AI winter comes we in the humanities could pick up the pieces and use these techniques to try to explain the past, but I’m getting ahead of myself and predicting another winter.

What Ever Happened to IBM’s Watson? – The New York Times

IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson.

The New York Times has a story about What Ever Happened to IBM’s Watson? The story is a warning to all of us about the danger of extrapolating from intelligent behaviour in one limited domain to others. Watson got good enough at trivia question answering (or posing) to win at Jeopardy!, but that didn’t scale out.

IBM’s strategy is interesting to me. Developing an AI to win at a game like Jeopardy! echoes what IBM did with Deep Blue, which won at chess in 1997. Winning at a game considered paradigmatically a game of intelligence is a great way to get public relations attention.

Interestingly what seems to be working with Watson is not the moon shot game playing type of service, but the automation of basic natural language processing tasks.

Having recently read Edwin Black’s IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation, I must say that the choice of the name “Watson” grates. Thomas Watson was responsible for IBM’s ongoing engagement with the Nazis, for which he got a medal from Hitler in 1937. Watson didn’t seem to care how IBM’s data processing technology was being used to manage people, especially Jews. I hope the CEOs of AI companies today are more ethical.

The ethics of regulating AI: When too much may be bad

By trying to put prior restraints on the release of algorithms, we will make the same mistake Milton’s censors were making in trying to restrict books before their publication. We will stifle the myriad possibilities inherent in an evolving new technology and the unintended effects that it will foster among new communities who can extend its reach into novel and previously unimaginable avenues. In many ways it will defeat our very goals for new technology, which is its ability to evolve, change and transform the world for the better.

3 Quarks Daily has another nice essay on ethics and AI by Ashutosh Jogalekar. This one is about The ethics of regulating AI: When too much may be bad. The argument is that we need to be careful about regulating algorithms preemptively. As the quote above makes clear, he makes three related points:

  • We need to be careful about censoring algorithms before they are tried.
  • One reason is that it is very difficult to predict negative or positive outcomes of new technologies. Innovative technologies almost always have unanticipated effects and censoring them would limit our ability to learn about the effects and benefit from them.
  • Instead we should manage the effects as they emerge.

I can imagine some responses to this argument:

  • Unanticipated effects are exactly what we should be worried about. The reason for censoring preemptively is precisely to control for unanticipated effects. Why not encourage better anticipation of effects?
  • Unanticipated effects, especially network effects, often only manifest themselves when the technology is used at scale. By then it can be difficult to roll back the technology. Precisely when there is a problem is when we can’t easily change the way the technology is used.
  • One person’s unanticipated effect is another’s business or another’s freedom. There is rarely consensus about the value of effects.

I also note how Jogalekar talks about the technology as if it had agency. He talks about the technology’s ability to evolve. Strictly speaking the technology doesn’t evolve, but our uses do. When it comes to innovation we have to be careful not to ascribe agency to technology, as if it were some impersonal force we cannot resist.

Excavating AI

The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Excavating AI is an important paper by Kate Crawford and Trevor Paglen that looks at “The Politics of Images in Machine Learning Training Sets.” They look at different ways that politics and assumptions can creep into the training datasets that are (or were) widely used in AI:

  • There is the overall taxonomy used to annotate (label) the images.
  • There are the individual categories used, which can be problematic or irrelevant.
  • There are the images themselves and how they were obtained.
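One practical consequence is that these assumptions can at least be surfaced before a dataset is reused. A minimal sketch of such an audit, assuming a hypothetical labels.txt with one tab-separated image path and label per line (the file name and format are invented; the watchlist entries echo the kinds of judgmental person-categories Crawford and Paglen discuss):

```python
from collections import Counter

LABELS_FILE = "labels.txt"  # hypothetical "<image_path>\t<label>" per line

# Examples of judgmental person-categories of the sort the paper criticizes.
WATCHLIST = {"loser", "kleptomaniac", "bad person"}

def audit(path):
    """Summarize a dataset's taxonomy and flag categories needing review."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            _, label = line.rstrip("\n").split("\t", 1)
            counts[label] += 1
    total = sum(counts.values())
    print(f"{len(counts)} categories over {total} images")
    for label, n in counts.most_common(10):
        print(f"  {label}: {n}")
    flagged = {l: n for l, n in counts.items() if l.lower() in WATCHLIST}
    if flagged:
        print("categories needing review:", flagged)

audit(LABELS_FILE)
```

An audit like this only exposes the taxonomy; judging the categories and how the images were obtained still requires the kind of critical work the authors do.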

They point out how many of the image datasets used for face recognition have been trimmed or have disappeared as they got criticized, but they may still be influential as they were downloaded and are circulating in AI labs. These datasets with their assumptions have also been used to train commercial tools.

I particularly like how the authors discuss their work as an archaeology, perhaps in reference to Foucault (though they don’t mention him).

I would argue that we need an ethics of care and repair to maintain these datasets usefully.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a website with an AI bot that produces inspiring quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and the system, while generating them, also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it shows a model of how we might use AI as a form of prompt – generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?
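InspiroBot doesn’t document how it works, but the flavour of such generation is easy to approximate. A toy sketch along Mad-Libs lines (the templates and word lists are invented here; presumably the real bot uses something statistically richer, perhaps a language model):

```python
import random

TEMPLATES = [
    "Don't {verb} your {noun}; {verb2} it.",
    "A {adj} {noun} is the beginning of {abstract}.",
    "Before you can {verb} the {noun}, you must become the {noun}.",
]
WORDS = {
    "verb": ["fear", "polish", "question", "measure"],
    "verb2": ["embrace", "transcend", "compost", "become"],
    "noun": ["journey", "spreadsheet", "horizon", "doubt"],
    "adj": ["luminous", "quiet", "recursive", "hungry"],
    "abstract": ["wisdom", "breakfast", "infinity", "acceptance"],
}

def inspire(rng=random):
    """Fill a random template with random words of the right type."""
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})

for _ in range(3):
    print(inspire())
```

Even this shallow generator produces aphorisms we are tempted to read meaning into, which is part of the joke and part of the serious point.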

Thanks to Chelsea for this.

What happens when pacifist soldiers search for peace in a war video game

What happens to pacifist soldiers stuck in a war video game? A history of military desertion with the aid of Battlefield V

Aeon has a very interesting 20+ minute short video on What happens when pacifist soldiers search for peace in a war video game. The video looks at how one might desert a war in a war video game. Of course, the games don’t let you, but there are workarounds.

This is the second smart video shot in-game by folks associated with Total Refusal, a “Digital Disarmament Movement.”