Exploring through Markup: Recovering COCOA. This paper looked at an experimental Voyant tool that lets one use COCOA markup to explore a text in different ways. COCOA markup is a simple form of markup that was superseded by XML languages like those developed with the TEI. The paper recovered some of the history of markup and what we may have lost.
Designing for Sustainability: Maintaining TAPoR and Methodi.ca. This paper was presented by Holly Pickering and discussed the processes we have set up to maintain TAPoR and Methodi.ca.
Our team also had two posters. One, “Generative Ethics: Using AI to Generate,” showed a toy that generates statements about artificial intelligence and ethics. The other, “Discovering Digital Methods: An Exploration of Methodica for Humanists,” showed what we are doing with Methodi.ca.
While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in localization of educational content, for example). The artists provided a voice actor with a script, and then the AI was trained on existing video of Zuckerberg and that of the voice actor to morph Zuckerberg’s facial movements to match the actor’s.
What is interesting is that the Zuckerberg video is part of an installation called Spectre with a number of deliberate fakes that were exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name implies, suggests how our data can be used to create ghost media of us, but also reminds us playfully of the fictional criminal organization that haunted James Bond. We are now being warned that real but spectral organizations could haunt our democracy, messing with elections anonymously.
Needless to say, it raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour, creating a form of digital neighbourhood watch. The article references a Motherboard article that suggests that such digital neighbourhood surveillance can lead to racism.
Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such “crime and safety” focused platforms can actively reinforce racism.
All we need now is AI in the mix: face recognition so you can identify anyone walking past your door.
The conference was opened by Reuben Quinn whose grandfather signed Treaty 6. He challenged us to think about what labels and labelling mean. Later Kim Tallbear challenged us to think about how we want the encounter with other intelligences to go. We don’t have a good track record of encountering the other and respecting intelligence. Now is the time to think about our positionality and to develop protocols for encounters. We should also be open to different forms of intelligence, not just ours.
Generative Adversarial Networks (GANs) analyze tens of thousands of images, learn from their features, and are trained with the aim to create new images that are indistinguishable from the original data source.
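The adversarial idea can be seen in a toy form. The sketch below (illustrative only; real image GANs use deep networks in frameworks like PyTorch, and the one-parameter "generator" and logistic "discriminator" here are made-up stand-ins) pits a generator that shifts noise toward the real data against a discriminator that tries to tell the two apart:

```python
import numpy as np

# Toy sketch of adversarial training on 1-D data, not a real GAN.
rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution centred at 4.
real = rng.normal(4.0, 1.0, size=100)

# Generator: a single learnable shift applied to noise.
g_shift = 0.0
# Discriminator: logistic regression with one weight and bias.
d_w, d_b = 0.1, 0.0

def discriminate(x, w, b):
    """Probability the discriminator assigns to x being real."""
    z = np.clip(w * x + b, -30.0, 30.0)  # clip to avoid overflow
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(500):
    noise = rng.normal(0.0, 1.0, size=100)
    fake = noise + g_shift

    # Discriminator step: push scores on real data up, on fakes down.
    p_real = discriminate(real, d_w, d_b)
    p_fake = discriminate(fake, d_w, d_b)
    d_w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: shift the fakes so the discriminator
    # scores them higher (non-saturating gradient).
    p_fake = discriminate(fake, d_w, d_b)
    g_shift += lr * np.mean((1 - p_fake) * d_w)

# After training, the fakes should sit near the real data's mean.
print(round(g_shift, 1))
```

The same tug-of-war, scaled up to millions of parameters and image data, is what produces the "indistinguishable" outputs described above.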
They also point out that many of the same concerns people have about AI art today were voiced about photography in the 19th century. Photography automated the image making business much as AIs are automating other tasks.
Can we use these GANs for other generative scholarship?
The research concludes that although welfare reform may be leading to cost savings for the Department of Human Services (DHS), substantial costs are being shifted to vulnerable customers and the community services that support them. It is they who are paying the price of welfare reform.
The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.
The New York Times has an opinion piece by Sahil Chinoy, We Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment they ran where they took footage of people walking past some cameras installed in Bryant Park and compared them to known people who work in the area (scraped from the web sites of organizations that have offices in the neighborhood). Everything they did used public resources that others could use. The cameras stream their footage here. Anyone can scrape the images. The image database they gathered came from public web sites. The software is a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
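The matching step at the heart of such a pipeline is simple in outline. As a rough illustration (the commercial service the Times used computes face embeddings with deep networks; the vectors, names, and threshold below are made up for the sketch), matching a face from footage against a scraped gallery can be reduced to comparing embedding vectors:

```python
import numpy as np

# Illustrative sketch of the matching step in a face recognition
# pipeline. The embeddings below are made-up stand-ins for the
# vectors a real system would compute from face images.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching name from the gallery, or None.
    `gallery` maps names (scraped from public sites, in the
    Times' case) to face embeddings."""
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical gallery of known workers, and a probe face
# embedding extracted from camera footage.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.28]
print(identify(probe, gallery))  # prints: alice
```

The point of the experiment stands out in the sketch: once the gallery exists and the cameras stream publicly, identification is a few lines of comparison.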
I’m intrigued by this experiment by the New York Times. It is a form of design thinking where they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could say it is a form of journalistic experimentation.
Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for recognition in all sorts of situations? Do we need to start guarding our faces?
Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.
This is one of a number of excellent articles by the New York Times that is part of their Privacy Project.
Are robots competing for your job?
Probably, but don’t count yourself out.
The New Yorker magazine has a great essay by Jill Lepore, Are Robots Competing for Your Job? (Feb. 25, 2019). The essay talks about the various predictions, including the prediction that R.I. (Remote Intelligence or global workers) will take your job too. The fear of robots is the other side of the coin of the fear of immigrants, which raises the question of why we are panicking over jobs when unemployment is so low.
Misery likes a scapegoat: heads, blame machines; tails, foreigners. But is the present alarm warranted? Panic is not evidence of danger; it’s evidence of panic. Stoking fear of invading robots and of invading immigrants has been going on for a long time, and the predictions of disaster have, generally, been bananas. Oh, but this time it’s different, the robotomizers insist.
Lepore points out how many job categories have been lost only to be replaced by others which is why economists are apparently dismissive of the anxiety.
Some questions we should be asking include:
Who benefits from all these warnings about job loss?
How do these warnings function rhetorically? What else might they be saying? How are they interpretations of the past by futurists?
How is the panic about job losses tied to worries about immigration?
Today I learned about Pius Adesanmi who died in the recent Ethiopian Airlines crash. From all accounts he was an inspiring professor of English and African Studies at Carleton. You can hear him in a TEDxEuston talk embedded above. Or you can read from his collection of satirical essays titled Naija No Dey Carry Last: Thoughts on a Nation in Progress.
In the TEDx talk he makes a prescient point about new technologies,
We are undertakers. Man will always preside over the funeral of any piece of technology that pretends to replace him.
He connects this prediction about how all new technologies, including AI, will also pass on with a reflection on Africa as a place from which to understand technology.
And that is what Africa understands so well. Should Africa face forward? No. She understands that there will be man to preside over the funeral of these new innovations. She doesn’t need to face forward if she understands human agency. Africa is the forward that the rest of humanity must face.
We need this vision of/from Africa. It gets ahead of the ever-returning hype cycle of new technologies. It imagines a position from which we escape the never-ending discourse of disruptive innovation which limits our options before AI.