The 100 Worst Ed-Tech Debacles of the Decade

With the end of the year there are some great articles showing up reflecting on debacles of the decade. One of my favorites is Audrey Watters’ The 100 Worst Ed-Tech Debacles of the Decade. Ed-Tech is one of those fields where, over and over, techies think they know better. Some of the debacles Watters discusses:

  • 3D Printing
  • The “Flipped Classroom” (Full disclosure: I sat on a committee that funded these.)
  • Op-Eds to ban laptops
  • Clickers
  • Stories about the end of the library
  • Interactive whiteboards
  • The K-12 Cyber Incident Map (Check it out here)
  • IBM Watson
  • The Year of the MOOC

This collection of 100 terrible ideas in instructional technology should be mandatory reading for all of us who have been keen on ed-tech. (And I am one who has developed ed-tech and oversold it.) Each item is a mini essay with links worth following.

The weird, wonderful world of Y2K survival guides

The category amounted to a giant feedback loop in which the existence of Y2K alarmism led to more of the same.

Harry McCracken in Fast Company has a great article on The weird, wonderful world of Y2K survival guides: A look back (Dec. 13, 2019). The article samples some of the hype around the disruptive potential of the millennium. Particularly worrisome are the political aspects of the folly. People (again) predicted the fall of the government and the need to prepare for the ensuing chaos. (Why is it that some people seem to look forward to such a collapse?)

Technical savvy didn’t necessarily inoculate an author against millennium-bug panic. Edward Yourdon was a distinguished software architect with plenty of experience relevant to the challenge of assessing the Y2K bug’s impact. His level of Y2K gloominess waxed and waned, but he was prone to declarations such as “my own personal Y2K plans include a very simple assumption: the government of the U.S., as we currently know it, will fall on 1/1/2000. Period.”

Interestingly, few people panicked despite all the predictions. Most people went out and celebrated.

All of this should be a warning for those of us who are tempted to predict that artificial intelligence or social media will lead to some sort of disaster. There is an ethics to predicting ethical disruption. Disruption, almost by definition, never happens as you thought it would.

ParityBOT: Twitter bot

ParityBOT is a chatbot developed here in Edmonton that tweets positive things about women in politics in response to hateful tweets. It sends empowering messages.

You can read about it in a CBC story, Engineered-in-Edmonton Twitter bot combats misogyny on the campaign trail.

The bot follows all women candidates in the election and uses some sort of AI or sentiment detection to identify nasty tweets aimed at them, and then responds with a positive message from a collection crowdsourced from the public. What isn’t clear is whether the positive message is sent to the offending tweeter or just posted generally.
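ParityYEG hasn’t published the details of its pipeline, so the following is only a minimal sketch of the general idea, using the open-source VADER sentiment scorer; the threshold and messages are invented for illustration.

```python
# A guess at the general shape of a ParityBOT-style filter, using the
# open-source VADER sentiment scorer. The threshold and messages are
# invented for illustration; the real bot's pipeline is not public.
import random

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

POSITIVE_MESSAGES = [  # stand-in for the crowdsourced collection
    "Thank you for standing for office and serving your community.",
    "Women belong in politics. Keep going.",
]

def respond_if_hateful(tweet_text, threshold=-0.7):
    """Return a positive message when a tweet scores as strongly negative."""
    score = analyzer.polarity_scores(tweet_text)["compound"]  # -1 (negative) to 1
    if score <= threshold:
        return random.choice(POSITIVE_MESSAGES)
    return None

print(respond_if_hateful("You are a disgrace and should quit."))
```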

ParityBOT was developed by ParityYEG, which is a collaboration between the Alberta Machine Intelligence Institute and scientist Kory Mathewson.

Slaughterbots

On the Humanist discussion list John Keating recommended the short video Slaughterbots, which presents a plausible scenario where autonomous drones are used to target dissent using social media data. Watch it! It is well done and raises real issues in a short, credible video.

While the short is really about autonomous weapons and the need to ban them, I note that one of the ideas included is that dissent could be silenced by using social media to target people. The scenario imagines that university students who shared a dissenting video on social media have their data harvested (including images of their faces) and the drones target them using face recognition. Science fiction, but suggestive of how a social media presence can be used for control.


Of Course Citizens Should Be Allowed to Kick Robots | WIRED

Seen in the wild, robots often appear cute and nonthreatening. This doesn’t mean we shouldn’t be hostile.

Wired magazine has a nice short piece that suggests Of Course Citizens Should Be Allowed to Kick Robots. In some ways the point of the essay is not how to treat robots, but whether we should tolerate these surveillance robots on our sidewalks.

Blanket no-punching policies are useless in a world full of terrible people with even worse ideas. That’s even more true in a world where robots now do those people’s bidding. Robots, like people, are not all the same. While K5 can’t threaten bodily harm, the data it collects can cause real problems, and the social position it puts us all in—ever suspicious, ever watched, ever worried about what might be seen—is just as scary. At the very least, it’s a relationship we should question, not blithely accept as K5 rolls by.

The question, “Is it OK to kick a robot?” is a good one that nicely brings the ethics of human-robot co-existence down to earth and onto our sidewalks. What sort of respect does the proliferation of robots and scooters deserve? How should we treat these stupid things when they enter our everyday spaces?

DiGRA 2019 and Replaying Japan 2019

Read my conference notes on DiGRA 2019 and Replaying Japan 2019 here. The two conferences were held back to back (with a shared keynote) in Kyoto at Ritsumeikan.

Keiji Amano deserves a lot of credit for putting together the largest Replaying Japan programme ever. The folks at the Ritsumeikan Center for Game Studies should also be thanked for organizing the facilities for both conferences. They have established themselves as leaders in Japan in the field.

I gave two papers:

  • “The End of Pachinko” (given with Amano) looked at the decline of pachinko and traditional forms of gambling in the face of the legalization of casinos. It looked at different types of ends, like the ends of machines.
  • “Work Culture in Early Japanese Game Development” (with Amano, Okabe, Ly and Whistance-Smith) used text analysis of Szczepaniak’s series of interviews, the Untold History of Japanese Game Developers, as a starting point to look at themes like stress and gender. (A toy sketch of this kind of theme counting follows the list.)
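
As a rough illustration of the kind of theme counting involved (not our actual pipeline; the theme lexicons and file name here are invented):

```python
import re
from collections import Counter

# Invented, tiny theme lexicons for illustration only; a real study
# would build and validate these against the corpus.
THEMES = {
    "stress": {"crunch", "overtime", "deadline", "exhausted", "pressure"},
    "gender": {"women", "female", "gender"},
}

def theme_counts(text):
    """Count how often words from each theme lexicon occur in a text."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {theme: sum(words[w] for w in lexicon)
            for theme, lexicon in THEMES.items()}

# "interview.txt" is a placeholder for one interview transcript.
with open("interview.txt") as f:
    print(theme_counts(f.read()))
```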

The quality of the papers in both conferences was very high. I expect this of DiGRA, but it was great to see that Replaying Japan, which is more inclusive, is getting better and better. I was particularly impressed by some of the papers by our Japanese colleagues, like a paper delivered by Kobayashi on the “Early History of Hobbyist Production Field of Video Games and its Effect on Game Industries in Japan.” This was rich with historical evidence. Another great one was “Researching AI technologies in 80’s Japanese Game Industry” delivered by Miyake, who is involved in some very interesting preservation projects.

CIFAR Amii Summer Institute on AI and Society

Last week I attended the CIFAR and Amii Summer Institute on AI and Society. This brought together a group of faculty and new scholars to workshop ideas about AI, Ethics and Society. You can see conference notes here on philosophi.ca. Some of the interventions that struck me included:

  • Rich Sutton talked about how AI is a way of trying to understand what it is to be human. He defined intelligence as the ability to achieve goals in the world. Reinforcement learning is a form of machine learning configured to achieve autonomous AI; it is therefore more ambitious and harder, but will also get us closer to intelligence. RL uses value functions to map states to values; bots then try to maximize rewards (states that have value). It is a simplistic but powerful idea about intelligence. (A toy value-function example follows this list.)
  • Jason Millar talked about autonomous vehicles and how right now mobility systems like Google Maps have only one criterion for planning a route for you, namely the time to get there. He asked what it would look like to have other criteria, like how difficult the driving would be, or the best view, or the fewest bumps. He wants the mobility systems being developed to be open to different values. These systems will become part of our mobility infrastructure.
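
Sutton’s point about value functions is easy to show in miniature. Here is a toy of my own (not anything from his talk): tabular Q-learning on a five-state corridor where only the rightmost state pays a reward.

```python
import random

# Toy illustration (mine, not Sutton's): a value function maps states to
# expected future reward, and an agent improves by acting greedily on it.
# Five-state corridor; reward only at the right end.
N_STATES, ACTIONS = 5, (-1, +1)          # states 0..4; move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):                      # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Temporal-difference update: nudge Q toward reward + discounted value.
        target = reward + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned state values encode "distance to reward".
print({s: round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})
```

The learned values end up encoding how far each state is from the reward, which is the “map states to values” idea in its simplest form.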

After a day of talks, during which I gave a talk about the history of discussions about autonomy, we had a day and a half of workshops where groups formed and developed things. I was part of a team that developed a critique of the EU Guidelines for Trustworthy AI.

Conference notes for CSDH 2019

In early June I was at the Congress for the Humanities and Social Sciences. I took conference notes on the Canadian Society for Digital Humanities 2019 event and on the Canadian Game Studies Association conference, 2019. I was involved in a number of papers:

  • Exploring through Markup: Recovering COCOA. This paper looked at an experimental Voyant tool that lets one use COCOA markup to explore a text from different angles. COCOA markup is a simple form of markup that was superseded by XML languages like those developed with the TEI. The paper recovered some of the history of markup and what we may have lost. (A toy example of COCOA-style markup follows this list.)

  • Designing for Sustainability: Maintaining TAPoR and Methodi.ca. This paper was presented by Holly Pickering and discussed the processes we have set up to maintain TAPoR and Methodi.ca.

  • Our team also had two posters, one on “Generative Ethics: Using AI to Generate” that showed a toy that generates statements about artificial intelligence and ethics. The other, “Discovering Digital Methods: An Exploration of Methodica for Humanists” showed what we are doing with Methodi.ca.
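
For readers who have never seen it, a COCOA reference sets the current value of a category (title, speaker, etc.) until the next tag of the same category. Below is a simplified sketch assuming single-letter categories; the little parser is mine for illustration, not the Voyant tool.

```python
import re

# A toy sample of COCOA-style markup (simplified; real conventions
# varied by program): <S HAMLET> sets the current speaker until the
# next <S ...> tag.
sample = """<T Hamlet>
<S HAMLET>
To be, or not to be, that is the question:
<S OPHELIA>
Good my lord, how does your honour for this many a day?
"""

def parse_cocoa(text):
    """Yield (context, line) pairs; context maps category letters to values."""
    context = {}
    for line in text.splitlines():
        m = re.match(r"<(\w)\s+(.+)>$", line.strip())
        if m:
            context[m.group(1)] = m.group(2)   # update the active category
        elif line.strip():
            yield dict(context), line

for ctx, line in parse_cocoa(sample):
    print(ctx, "|", line)
```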

Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg?

‘Imagine this for a second…’ (2019) from Bill Posters on Vimeo.

A ‘deepfake’ of Zuckerberg was uploaded to Instagram and appears to show him delivering an ominous message

The issue of “deepfakes” is big on the internet after someone posted a slowed-down video of Nancy Pelosi to make her look drunk and then, after Facebook didn’t take it down, a group posted a fake Zuckerberg video. See Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg? This video was created by artists Posters and Howe and is part of a series.

While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in the localization of educational content, for example). The artists provided a voice actor with a script, and then the AI trained on existing video of Zuckerberg and of the voice actor to morph Zuckerberg’s facial movements to match the actor’s.

What is interesting is that the Zuckerberg video is part of an installation called Spectre, with a number of deliberate fakes, that was exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name suggests, both evokes how our data can be used to create ghost media of us and playfully reminds us of the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.

Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements

Ring, Amazon’s doorbell company, posted a video of a woman suspected of a crime and asked users to call the cops with information.

VICE has a story about how Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements. The story is that Ring took out an ad which showed suspicious behaviour. A woman who is presumably innocent until proven guilty is shown clearly in order to sell more alarm systems. The information about her came from the police.

Needless to say, this raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour, creating a form of digital neighbourhood watch. The article references a Motherboard article that suggests that such digital neighbourhood surveillance can lead to racism.

Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such “crime and safety” focused platforms can actively reinforce racism.

All we need now is AI in the mix: face recognition so you can identify anyone walking past your door.
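
How little would that take? A minimal sketch using the open-source face_recognition library, assuming you already have a photo of the person; the filenames are placeholders.

```python
# Sketch of how little code doorbell face matching would take, using the
# open-source face_recognition library. Filenames are placeholders.
import face_recognition

known_image = face_recognition.load_image_file("neighbour.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

frame = face_recognition.load_image_file("doorbell_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    # compare_faces returns one boolean per known face supplied
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Known face at the door")
```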