The bot follows all the women candidates in the election and uses sentiment detection to identify nasty tweets aimed at them, then responds with a positive message drawn from a collection crowdsourced from the public. What isn't clear is whether the positive message is sent to the offending tweeter or just posted generally.
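ParityBOT's actual model and data are not described here, but the basic idea can be sketched. The lexicon, threshold, and messages below are illustrative assumptions, not the bot's real implementation, which presumably uses a trained classifier rather than word counting:

```python
import random
from typing import Optional

# Hypothetical sketch of the ParityBOT idea: score an incoming tweet for
# negativity and answer abusive ones with a crowdsourced positive message.
# The word list and threshold are invented for illustration.
NEGATIVE_WORDS = {"nasty", "stupid", "awful", "hate", "ugly"}
POSITIVE_MESSAGES = [
    "Women in politics make our democracy stronger.",
    "Thank you for standing for office.",
]

def toxicity_score(tweet: str) -> float:
    """Fraction of words that appear in the negative lexicon."""
    words = tweet.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def respond(tweet: str, threshold: float = 0.2) -> Optional[str]:
    """Return a positive message if the tweet scores above the threshold."""
    if toxicity_score(tweet) >= threshold:
        return random.choice(POSITIVE_MESSAGES)
    return None
```

A real deployment would replace `toxicity_score` with a proper sentiment or toxicity model; the sketch just shows the trigger-and-respond loop.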
ParityBOT was developed by ParityYEG which is a collaboration between the Alberta Machine Intelligence Institute and scientist Kory Mathewson.
On the Humanist discussion list John Keating recommended the short video Slaughterbots, which presents a plausible scenario where autonomous drones are used to target dissent using social media data. Watch it! It is well done and raises real issues in a credible way.
While the short is really about autonomous weapons and the need to ban them, I note that one of the ideas included is that dissent could be silenced by using social media to target people. The scenario imagines that university students who shared a dissenting video on social media have their data harvested (including images of their faces) and are then targeted by drones using face recognition. Science fiction, but suggestive of how social media presence can be used for control.
Blanket no-punching policies are useless in a world full of terrible people with even worse ideas. That’s even more true in a world where robots now do those people’s bidding. Robots, like people, are not all the same. While K5 can’t threaten bodily harm, the data it collects can cause real problems, and the social position it puts us all in—ever suspicious, ever watched, ever worried about what might be seen—is just as scary. At the very least, it’s a relationship we should question, not blithely accept as K5 rolls by.
The question, “Is it OK to kick a robot?” is a good one that nicely brings the ethics of human-robot co-existence down to earth and onto our sidewalks. What sort of respect does the proliferation of robots and scooters deserve? How should we treat these stupid things when they enter our everyday spaces?
Keiji Amano deserves a lot of credit for putting together the largest Replaying Japan programme ever. The folks at the Ritsumeikan Center for Game Studies should also be thanked for organizing the facilities for both conferences. They have established themselves as leaders in Japan in the field.
I gave two papers:
“The End of Pachinko” (given with Amano) looked at the decline of pachinko and traditional forms of gambling in the face of the legalization of casinos. It looked at different types of ends, like the ends of machines.
“Work Culture in Early Japanese Game Development” (with Amano, Okabe, Ly and Whistance-Smith) used text analysis of Szczepaniak's series of interviews, The Untold History of Japanese Game Developers, as a starting point to look at themes like stress and gender.
The quality of the papers in both conferences was very high. I expect this of DiGRA, but it was great to see that Replaying Japan, which is more inclusive, is getting better and better. I was particularly impressed by some of the papers from our Japanese colleagues, like the paper delivered by Kobayashi on the “Early History of Hobbyist Production Field of Video Games and its Effect on Game Industries in Japan.” This was rich with historical evidence. Another great one was “Researching AI technologies in 80’s Japanese Game Industry,” delivered by Miyake, who is involved in some very interesting preservation projects.
Rich Sutton talked about how AI is a way of trying to understand what it is to be human. He defined intelligence as the ability to achieve goals in the world. Reinforcement learning is a form of machine learning aimed at achieving autonomous AI and is therefore more ambitious and harder, but it will also get us closer to intelligence. RL uses value functions to map states to values; bots then try to maximize rewards (reaching states that have value). It is a simplistic but powerful idea about intelligence.
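The value-function idea can be made concrete with a toy. Below is a minimal tabular Q-learning sketch: the table Q maps (state, action) pairs to estimated value, and the agent acts to maximize expected reward. The five-state corridor, rewards, and learning constants are all invented for illustration, not anything Sutton presented:

```python
import random

# Toy environment: a corridor of 5 states; reaching the rightmost state
# (state 4) yields reward 1 and ends the episode.
N_STATES = 5
ACTIONS = (-1, 1)                    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# The value function: maps (state, action) to an estimated value.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reward 1 for reaching the last state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def policy(state):
    """Epsilon-greedy action choice with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(1)
for _ in range(500):                 # episodes
    state, done = 0, False
    for _ in range(100):             # cap episode length
        action = policy(state)
        nxt, reward, done = step(state, action)
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# The learned value of each state is the max over actions: values rise
# toward the rewarding state, which is the "map states to values" idea.
values = [max(Q[(s, a)] for a in ACTIONS) for s in range(N_STATES)]
```

After training, states nearer the goal have higher value and the agent prefers moving right, which is the whole idea in miniature.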
Jason Millar talked about autonomous vehicles and how right now mobility systems like Google Maps have only one criterion for planning a route for you, namely the time to get there. He asked what it would look like to have other criteria, like how difficult the driving would be, the best view, or the fewest bumps. He wants the mobility systems being developed to be open to different values. These systems will become part of our mobility infrastructure.
After a day of talks, during which I gave a talk about the history of discussions about autonomy, we had a day and a half of workshops where groups formed and developed things. I was part of a team that developed a critique of the EU Guidelines for Trustworthy AI.
Exploring through Markup: Recovering COCOA. This paper looked at an experimental Voyant tool that allows one to use COCOA markup as a way of exploring a text in different ways. COCOA markup is a simple form of markup that was superseded by XML languages like those developed with the TEI. The paper recovered some of the history of markup and what we may have lost.
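For readers who have never seen it: COCOA references are lightweight tags like `<A AUSTEN>` or `<C 2>` that set a category (author, chapter, speaker, and so on) applying to all following text until that category is set again. The parser below is a minimal sketch of that idea, not the Voyant tool's implementation, and the sample text and category letters are invented:

```python
import re

# Tiny invented sample using COCOA-style references:
# A = author, C = chapter. A tag applies until the category is reset.
SAMPLE = """<A AUSTEN> <C 1>
It is a truth universally acknowledged...
<C 2>
Mr. Bennet was among the earliest..."""

TAG = re.compile(r"<(\w+)\s+([^>]+)>")

def parse_cocoa(text):
    """Return (state, passage) pairs, where state maps category -> value."""
    state, passages, pos = {}, [], 0
    for m in TAG.finditer(text):
        chunk = text[pos:m.start()].strip()
        if chunk:
            passages.append((dict(state), chunk))
        state[m.group(1)] = m.group(2).strip()
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        passages.append((dict(state), tail))
    return passages
```

Because the state is cumulative, every passage knows its current author, chapter, etc., which is exactly what makes COCOA useful for slicing a text in different ways despite being so much simpler than TEI XML.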
Designing for Sustainability: Maintaining TAPoR and Methodi.ca. This paper was presented by Holly Pickering and discussed the processes we have set up to maintain TAPoR and Methodi.ca.
Our team also had two posters. One, on “Generative Ethics: Using AI to Generate,” showed a toy that generates statements about artificial intelligence and ethics. The other, “Discovering Digital Methods: An Exploration of Methodica for Humanists,” showed what we are doing with Methodi.ca.
While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in the localization of educational content, for example). The artists provided a voice actor with a script, and then the AI, trained on existing video of Zuckerberg and of the voice actor, morphed Zuckerberg's facial movements to match the actor's.
What is interesting is that the Zuckerberg video is part of an installation called Spectre, a collection of deliberate fakes exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name suggests, evokes how our data can be used to create ghost media of us, while also playfully reminding us of the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.
Needless to say, it raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour creating a form of digital neighbourhood watch. The article references a Motherboard article that suggests that such digital neighbourhood surveillance can lead to racism.
Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such “crime and safety” focused platforms can actively reinforce racism.
All we need now is AI in the mix: face recognition so you can identify anyone walking past your door.
The conference was opened by Reuben Quinn whose grandfather signed Treaty 6. He challenged us to think about what labels and labelling mean. Later Kim Tallbear challenged us to think about how we want the encounter with other intelligences to go. We don’t have a good track record of encountering the other and respecting intelligence. Now is the time to think about our positionality and to develop protocols for encounters. We should also be open to different forms of intelligence, not just ours.
Generative Adversarial Networks (GANs) analyze tens of thousands of images, learn their features, and are trained to create new images that are indistinguishable from the original data source.
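The adversarial mechanism behind that sentence can be shown at toy scale. Real GANs pit two deep networks against each other over images; in this deliberately tiny sketch both players are linear models over one-dimensional numbers, and all the constants are invented. The generator learns to turn noise into samples the discriminator cannot tell apart from "real" data drawn from a target Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 0.5       # the "data distribution" to imitate

# generator: g(z) = w*z + b ; discriminator: D(x) = sigmoid(a*x + c)
w, b = 1.0, 0.0
a, c = 0.1, 0.0
LR = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    z = rng.standard_normal(32)
    real = rng.normal(REAL_MEAN, REAL_STD, 32)
    fake = w * z + b

    # Discriminator ascends log D(real) + log(1 - D(fake)):
    # it tries to call real samples real and fakes fake.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += LR * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += LR * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator ascends log D(fake) (the "non-saturating" objective):
    # it tries to make its fakes fool the updated discriminator.
    d_fake = sigmoid(a * fake + c)
    w += LR * np.mean((1 - d_fake) * a * z)
    b += LR * np.mean((1 - d_fake) * a)

# Samples from the trained generator drift toward the real distribution.
samples = w * rng.standard_normal(1000) + b
```

The generator never sees the real data directly; it only gets gradient pressure from the discriminator's judgments, which is the adversarial trick the article describes at image scale.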
They also point out that many of the same concerns people have about AI art today were voiced about photography in the 19th century. Photography automated the image making business much as AIs are automating other tasks.
Can we use these GANs for other generative scholarship?