Cybersecurity

The New York Times has a nice short video on cybersecurity, which is increasingly an issue. One of the things it mentions is that it may have been the USA and Israel that opened the Pandora’s box of cyberweapons when they used Stuxnet to damage Iran’s nuclear programme. By using a sophisticated worm first, we both legitimized the use of cyberwar against countries one is not at war with and showed what could be done. This, at least, is the argument of a good book on Stuxnet, Countdown to Zero Day.

Now the problem is that the USA, while having good offensive capability, is also one of the most vulnerable countries because of the heavy use of information technology in all walks of life. How can we defend against the weapons we have let loose?

What is particularly worrisome is that cyberweapons are being designed to be hard to trace and subtly disruptive in ways that fall short of all-out war. We are seeing a new form of hot/cold war in which countries harass each other electronically without actually declaring war or seeking civilian input. After 2016, all democratic countries need to protect against electoral disruption, which puts democracies at a disadvantage relative to closed societies.

Making AI accountable easier said than done, says U of A expert

Geoff McMaster of the Folio (the U of A’s news site) wrote a nice article, Making AI accountable easier said than done, says U of A expert. The article quotes me on accountability and artificial intelligence. What we didn’t really talk about are forms of accountability for automata, including:

  • Explainability – Can someone get an explanation as to how and why an AI made a decision that affects them? If people can get an explanation that they can understand then they can presumably take remedial action and hold someone or some organization accountable.
  • Transparency – Is an automated decision-making process fully transparent so that it can be tested, studied and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability.
  • Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?

In all these cases there is a presumption of process, both to determine explainability, transparency, or responsibility and then to punish or correct for problems. Otherwise people will have no real recourse.

Writing with the machine

“…it’s like writing with a deranged but very well-read parrot on your shoulder.”

Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore, has been doing some interesting work with recurrent neural nets in order to generate text. See Writing with the machine. He trained a machine on science fiction and then hooked it into a text editor so it can complete sentences. The New York Times has a nice story on Sloan’s experiments, Computer Stories: A.I. Is Beginning to Assist Novelists.

One wonders what it would be like if you trained it on your own writing. Would it help you be yourself or discourage you from rereading your prose?


Making AI accountable easier said than done, says U of A expert

The Folio has a story on the ethics of AI that quotes me with the title, Making AI accountable easier said than done, says U of A expert.

One of the issues that interests me most now is the history of this discussion. We tend to treat the ethics of AI as a new issue, but people have been thinking about how automation would affect people for some time. There have been textbooks for teaching Computer Ethics, like that of Deborah G. Johnson, since the 1980s. As part of research we did on how computers were presented in the news, we found articles from the 1960s about how automation might put people out of work. They weren’t thinking of AI then, but the ethical and social effects that concerned people back then were similar. What few people discussed, however, was how automation affected different groups differently. Michele Landsberg wrote a prescient article, “Will Computer Replace the Working Girl?”, in 1964 for the women’s section of The Globe and Mail, arguing that it was women in the typing pools who were being put out of work. Likewise, I suspect that some groups will be more affected by AI than others and that we need to prepare for that.

A good book addressing how universities might prepare for the disruption of artificial intelligence is Robot-Proof: Higher Education in the Age of Artificial Intelligence by Joseph Aoun (MIT Press, 2017).

Instead of educating college students for jobs that are about to disappear under the rising tide of technology, twenty-first-century universities should liberate them from outdated career models and give them ownership of their own futures. They should equip them with the literacies and skills they need to thrive in this new economy defined by technology, as well as continue providing them with access to the learning they need to face the challenges of life in a diverse, global environment.

Skip the bus: this post-apocalyptic jaunt is the only New York tour you’ll ever need

Operation Jane Walk appropriates the hallmarks of an action roleplaying game – Tom Clancy’s The Division (2016), set in a barren New York City after a smallpox pandemic – for an intricately rendered tour that digs into the city’s history through virtual visits to some notable landmarks. Bouncing from Stuyvesant Town to the United Nations Headquarters and down the sewers, a dry-witted tour guide makes plain how NYC was shaped by the Second World War, an evolving economy and the ideological jousting between urban theorists such as Robert Moses and Jane Jacobs. Between stops, the guide segues into musical interludes and poetic musings, but doesn’t let us forget the need to brandish a weapon for self-defence. The result is a highly imaginative film that interrogates the increasingly thin lines between real and digital worlds – but it’s also just a damn good time.

Aeon has a great tour of New York using Tom Clancy’s The Division, Skip the bus: this post-apocalyptic jaunt is the only New York tour you’ll ever need. It looks like someone actually gives tours this way – a new form of urban tourism. What other cities could one do?

The oral history of the Hampsterdance: The twisted true story of one of the world’s first memes | CBC Arts

The CBC has a nice (and long) oral history about the Hampsterdance: The twisted true story of one of the world’s first memes. Deidre LaCarte created the original site on GeoCities in 1998 as a challenge, and, as the CBC puts it, it was the original meme to take off. You can see the original here.

It becomes clear as one reads on that none of the assets of the site were original; they were all clipart or music taken from elsewhere. Nonetheless LaCarte and others were able to make some money on the success of the site.

I personally think the first viral internet meme was the Mrs. Fields (or Neiman Marcus) cookie recipe story that circulated by email. It was an urban legend about being billed $250 for a recipe by a Mrs. Fields store and then sharing that recipe. According to Snopes this legend has quite a history going back to a 1948 cookbook.

Letting neural networks be weird

Halloween Costume Names Generated by a Weird AI

Jingwei, a bright digital humanities student working as a research assistant, has been playing with generative AI approaches from aiweirdness.com – Letting neural networks be weird. Janelle Shane has made neural networks funny by using them to generate things like new My Little Ponies. Jingwei scraped paper titles from various digital humanities conference sites, trained a network on them, and generated new titles just waiting to be proposed as papers:

  • The Catalogue of the Cultural Heritage Parts

  • Automatic European Pathworks and Indexte Corpus and Mullisian Descriptions

  • Minimal Intellectual tools and Actorical Normiels: The Case study of the Digital Humanities Classics

  • Automatic European Periodical Mexico: The Case of the Digital Hour

  • TEIviv Industics – Representation dans le perfect textbook

  • Conceptions of the Digital Homer Centre

  • Preserving Critical Computational App thinking in DH Languages

  • DH Potential Works: US Work Film Translation Science

  • Translation Text Mining and GiS 2.0

  • DH Facilitating the RIATI of the Digital Scholar

  • Shape Comparing Data Creating and Scholarly Edition

  • DH Federation of the Digital Humanities: The Network in the Halleni building and Web Study of Digital Humanities in the Hid-Cloudy

  • The First Web Study of Build: A “Digitie-Game as the Moreliency of the Digital Humanities: The Case study of the Digital Hour: The Scale Text Story Minimalism: the Case of Public Australian Recognition Translation and Puradopase

  • The Computational Text of Contemporary Corpora

  • The Social Network of Linguosation in Data Washingtone

  • Designing formation of Data visualization

  • The Computational Text of Context: The Case of the World War and Athngr across Theory

  • The Film Translation Text Center: The Context of the Cultural Hermental Peripherents

  • The Social Infrastructure  PPA: Artificial Data In a Digital Harl to Mexquise (1950-1936)

  • EMO Artificial Contributions of the Hauth Past Works of Warla Management Infriction

  • DAARRhK Platform for Data

  • Automatic Digital Harlocator and Scholar

  • Complex Networks of Computational Corpus

  • IMPArative Mining Trail with DH Portal

  • Pursour Auchese of the Social Flowchart of European Nation

  • The Stefanopology: The Digital Humanities
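The scrape–train–sample workflow behind titles like these can be sketched quite simply. This is not Shane’s or Jingwei’s actual code (they used neural networks); as a minimal, self-contained stand-in, here is a character-level Markov model in Python, trained on a few invented example titles:

```python
import random

# Hypothetical stand-ins for scraped conference paper titles (not the real corpus).
titles = [
    "Text Mining the Digital Archive",
    "Visualizing Cultural Heritage Data",
    "Computational Approaches to Literary History",
    "Mapping Networks of Correspondence",
    "Modelling Readers in the Digital Humanities",
]

def build_model(corpus, order=3):
    """Map each n-character sequence to the characters observed after it."""
    model = {}
    for title in corpus:
        padded = "^" * order + title + "$"  # ^ marks the start, $ the end
        for i in range(len(padded) - order):
            model.setdefault(padded[i:i + order], []).append(padded[i + order])
    return model

def generate(model, order=3, max_len=60):
    """Sample a new title one character at a time from the model."""
    state, out = "^" * order, []
    while len(out) < max_len:
        ch = random.choice(model[state])
        if ch == "$":  # the model chose to end the title here
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out)

model = build_model(titles)
for _ in range(3):
    print(generate(model))
```

With a low order the model mangles words in much the way the titles above are mangled; a real recurrent network learns longer-range structure, but the loop of scraping a corpus, training on it, and sampling new text is the same.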

Burrows and Antonia Archives: Centre For 21st Century Humanities

What happens to old digital humanities projects? Most vanish without a trace. Some get archived like the work of John Burrows and others at the Centre For Literary And Linguistic Computing (CLLC). Dr. Alexis Antonia kept an archive of CLLC materials which is now available from the Centre For 21st Century Humanities.

Anatomy of an AI System

Anatomy of an AI System – The Amazon Echo as an anatomical map of human labor, data and planetary resources. By Kate Crawford and Vladan Joler (2018)

Kate Crawford and Vladan Joler have created a powerful infographic and web site, Anatomy of an AI System. The dark illustration and site are an essay that starts with the Amazon Echo and then sketches out the global anatomy of this apparently simple AI appliance. They do this by looking at where the materials come from, where the labour comes from (and goes), and the underlying infrastructure.

Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data.

The essay/visualization is a powerful example of how we can learn by critically examining the technologies around us.

Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.

GoQueer Locative Game

Queer places are, by definition, sites of accretion, where stories, memories, and experiences are gathered. Queer place, in particular, is reliant on ephemeral histories, personal moments and memories. GoQueer intends to integrate these personal archives with places for you to discover.

I recently downloaded and started playing the iOS version of GoQueer from the App Store. It is a locative game by my colleague Dr. Maureen Engel.

Engel reflected on this project in a talk on YouTube titled Go Queer: A Ludic, Locative Media Experiment. She nicely theorizes her game not once but twice, in a doubled set of reflections that show how theorizing isn’t a step in project design but a continuous thinking-through.

You can also see an article reflecting on the game, titled Perverting Play: Theorizing a Queer Game Mechanic.