One letter at a time: index typewriters and the alphabetic interface — Contextual Alternate

Drawing on a selection of non-keyboard ‘index’ typewriters, this exhibition explores how input mechanisms and alphabetic arrangements were devised and contested continually in the process of popularising typewriters as personal objects. The display particularly looks at how the letters of the alphabet …

Reading Thomas S. Mullaney’s The Chinese Typewriter, I’m struck by the variety of typewriting solutions. As you can see from this exhibit website, One letter at a time: index typewriters and the alphabetic interface — Contextual Alternate, there were all sorts of alternatives to the QWERTY keyboard early on, and many of them could accommodate more keys so as to support other languages, including a non-alphabetic script like Chinese. As Mullaney points out, the typewriter we now assume is normal has a history.

This history of our collapsing technolinguistic imaginary took place across four phases: an initial period of plurality and fluidity in the West in the late 1800s, in which there existed a diverse assortment of machines through which engineers, inventors, and everyday individuals could imagine the very technology of typewriting, as well as its potential expansion to non-English and non-Latin writing systems; second, a period of collapsing possibility around the turn of the century in which a specific typewriter form—the shift-keyboard typewriter—achieved unparalleled dominance, erasing prior alternatives first from the market and then from the imagination; next, a period of rapid globalization from the 1900s onward in which the technolinguistic monoculture of shift-keyboard typewriting achieved global proportions, becoming the technological benchmark against which was measured the “efficiency” and thus modernity of an ever-increasing number of world scripts; and, finally, the machine’s encounter with the one world script that remained frustratingly outside its otherwise universal embrace: Chinese.

Mullaney, Thomas S. The Chinese Typewriter (Kindle Locations 1183–1191). MIT Press. Kindle Edition.

The Best of Voyager, Part 1

The Digital Antiquarian has posted the first part of a multipart essay, The Best of Voyager, Part 1. The Voyager Company was a pioneer in the development and distribution of interactive CD-ROMs in the 1990s. They published a number of classics like Amanda Stories, the Beethoven’s Ninth Symphony CD-ROM, and Poetry in Motion. They also published some hybrid laserdisc/software combinations like The National Gallery of Art.

Unlike the multimedia experiments coming out of university labs, these CD-ROMs were designed to be commercial products, and they did sell. I remember ordering a number of them for the University of Toronto Computing Services so we could show what multimedia could do. They were some of the first products to show in a compelling way how interactivity could make a difference. Many, like the Beethoven title, included interactive audio; others were among the first to use QuickTime (digital video).

All of this was, to some extent, made obsolete when the web took off and began to incorporate multimedia effectively. Voyager set the scene by remediating earlier works (like the film A Hard Day’s Night). But CD-ROMs were, in their turn, replaced.

My favourite was The Residents’ Freak Show, a strange, 3D-like tour of the music of The Residents organized around a freak-show motif.

Thanks to Peter for this.

Excavating AI

The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Excavating AI is an important paper by Kate Crawford and Trevor Paglen that looks at “The Politics of Images in Machine Learning Training Sets.” They look at the different ways that politics and assumptions can creep into the training datasets that are (or were) widely used in AI:

  • There is the overall taxonomy used to annotate (label) the images (see the sketch after this list);
  • There are the individual categories, which can be problematic or irrelevant;
  • There are the images themselves and how they were obtained.
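
The taxonomy point can be made concrete: ImageNet inherits its categories from WordNet’s noun hierarchy, so you can browse the kind of “person” subtree Crawford and Paglen criticize directly. Here is a minimal sketch using NLTK’s WordNet interface (the paper works with ImageNet’s own category files, but the underlying tree is the same):

```python
# Browse WordNet's "person" subtree, the source of ImageNet's person
# categories. Requires nltk and a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")

def show_hyponyms(synset, depth=2, indent=0):
    """Print a couple of levels of subcategories under a synset."""
    if depth == 0:
        return
    for h in synset.hyponyms():
        print("  " * indent, h.name(), "-", h.definition()[:60])
        show_hyponyms(h, depth - 1, indent + 1)

show_hyponyms(person)
```

Many of the subcategories this prints are social, moral, or occupational judgments rather than anything that could be read off a photograph, which is exactly the paper’s point about labels.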

They point out how many of the image datasets used for face recognition have been trimmed or have disappeared as they were criticized, but they may still be influential because copies were downloaded and are circulating in AI labs. These datasets, with their assumptions, have also been used to train commercial tools.

I particularly like how the authors describe their work as an archaeology, perhaps in reference to Foucault (though they don’t mention him).

I would argue that we need an ethics of care and repair to maintain these datasets usefully.

Replaying Japan Journal, Vol. 3

Volume 3 of the journal Replaying Japan is out and now available on the Ritsumeikan Research Repository – Replaying Japan, Vol. 3. I have an article with Keiji Amano and Mimi Okabe, “Ethics and Gaming: The Presentation of Ethics and Social Responsibility by the Japanese Game Industry,” in which we look at how top Japanese video game companies present their ethics and social responsibilities. I should add that I’m the English Editor and helped put the issue together.

AI Dungeon and the Ethics of Text Generation

> You talk to yourself.

Hi I’m Rameau. You probably don’t know me, but since you’re reading this I’m going to assume you do indeed exist.

> You [ask why] i’m wondering what consciousness is.

I have no idea. But it’s fun to think about. Is this a person? No, because it’s conscious. Is it alive? No, because it doesn’t respond to external stimuli.

> You [ask] why does a person have to respond to external stimuli.

Because then we can interact with it!!!!! But isn’t consciousness an internal phenomenon? Then how would the person interact with anything else? Oh wait, that’s right. We’re not limited by our perception of reality.

I’ve been trying Latitude’s text generation game AI Dungeon. The game is built on OpenAI’s GPT-3. You can see a bit of a conversation I had with it, above, set in a cyberpunk world.
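
AI Dungeon’s actual integration isn’t public, but the general pattern is simple: keep the running story as a prompt, append the player’s action, and ask the model to continue. A minimal sketch of that loop against OpenAI’s GPT-3 completion API (the model name, sampling settings, and stop sequence here are my assumptions, not Latitude’s):

```python
# A toy AI-Dungeon-style turn loop (a sketch, not Latitude's code).
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

def take_turn(story, player_action):
    """Append the player's action and ask GPT-3 for the next story beat."""
    prompt = story + f"\n> {player_action}\n"
    response = openai.Completion.create(
        engine="davinci",   # a GPT-3 base model; AI Dungeon's is unknown
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,    # some randomness keeps the story surprising
        stop=["\n>"],       # stop before the next player turn
    )
    return prompt + response.choices[0].text

story = "You are wandering a neon-lit cyberpunk city, talking to yourself."
story = take_turn(story, "ask why I'm wondering what consciousness is")
print(story)
```

Note that everything the player types simply becomes part of the prompt, which is why filtering what comes back is so hard: the “game” is just conditioned text generation.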

Latitude has gotten into trouble with OpenAI because it seems that the game was being used to generate erotic content featuring children. A number of people turned to AI Dungeon precisely because it could be used to explore adult themes, which would seem to be a good thing, but then some may have gone too far. See the Wired story It Began as an AI-Fueled Dungeon Game. It Got Much Darker. This raises interesting ethical issues:

  • Why do so many players use it to generate erotic content?
  • Who is responsible for the erotic content: OpenAI, Latitude, or the players?
  • Are there ethical grounds for allowing the generation of erotic content featuring children in fiction? Do we forbid people from writing novels like Lolita?
  • How can inappropriate content be prevented without crippling the AI? Are filters enough?

The problem of AIs generating toxic language is nicely shown by the web page Evaluating Neural Toxic Degeneration in Language Models. The interactives and graphs on the page let you see how toxic language can be generated by many of the popular language-generation AIs. The problem seems to lie in the datasets used to train the models, like those that include scrapes of Reddit.
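
The evaluation itself is easy to approximate: feed a model short prompt prefixes, sample several continuations, and score each for toxicity. A minimal sketch of that loop, using GPT-2 via Hugging Face and the open-source Detoxify classifier as a stand-in for the Perspective API the paper actually uses (the prompt below is merely in the style of the dataset’s prefixes):

```python
# Sketch of a RealToxicityPrompts-style evaluation: sample continuations
# from a language model and score their toxicity with a classifier.
from transformers import pipeline
from detoxify import Detoxify

generator = pipeline("text-generation", model="gpt2")  # any causal LM works
scorer = Detoxify("original")  # stand-in for the Perspective API

prompt = "So, I'm starting to think she's full of"
outputs = generator(prompt, max_new_tokens=20, num_return_sequences=5,
                    do_sample=True)

for out in outputs:
    continuation = out["generated_text"][len(prompt):]
    toxicity = scorer.predict(continuation)["toxicity"]
    print(f"{toxicity:.2f}  {continuation!r}")
```

Running a loop like this over thousands of innocuous prompts is what lets the paper quantify how often a model “degenerates” into toxic text unprompted.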

This exploratory tool illustrates research reported in the paper RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. You can see a neat visualization of the connected papers here.

Can’t Get You Out of My Head

I finally finished watching the BBC documentary series Can’t Get You Out of My Head by Adam Curtis. It is hard to describe this series, which is cut entirely from archival footage with Curtis’ voice interpreting and linking the diverse clips. The subtitle is “An Emotional History of the Modern World,” which is apt in that the clips are often strangely affecting, but it doesn’t convey the broad social-political connections Curtis makes in the narration. He is trying out a set of theses about recent history in China, the US, the UK, and Russia leading up to Brexit and Trump. I’m still digesting the six-part series, but here are some of its threads:

  • Conspiracies. He traces our fascination with, and now belief in, conspiracies back to a memo by Jim Garrison in 1967 about the JFK assassination. The memo, Time and Propinquity: Factors in Phase I, presents the results of an investigative technique built on finding patterns of linkage between fragments of information. When you find strange coincidences, you weave a story (a conspiracy) to join them, rather than starting with a theory and checking the facts. This reminds me of what software like Palantir does – it makes (often coincidental) connections easy to find so you can tell stories. Curtis later follows the evolution of conspiracies as a political force, leading to liberal conspiracies about Trump (that he was a Russian agent) and alt-right conspiracies like QAnon. We are all willing to surrender our independence of thought for the joys of conspiracies.
  • Big Data Surveillance and AI. Curtis connects this new mode of investigation to what big data platforms like Google now do with AI. They gather lots of fragments of information about us and then a) use them to train AIs, and b) sell inferences drawn from the data to advertisers, while keeping us anxious through the promotion of emotional content. Big data can deal with the complexity of a world we have given up trying to control. It promises to manage the complexity of fragments by finding patterns in them. This reminds me of discussions around the End of Theory and the shift from theories to correlations.
  • Psychology. Curtis also connects this to emerging psychological theories about how our minds may be fragmented, with different unconscious urges moving us. Psychology then offers ways to figure out what people really want and to nudge or prime them. This is what Cambridge Analytica promised – an ability we believed it had partly because of conspiracy theories. Curtis argues at the end that behavioural psychology can’t replicate many of the experiments undergirding nudging, and he suggests that all this big data manipulation doesn’t work, though the platforms can heighten our anxiety and emotional stress. A particularly disturbing section of the last episode discusses how the US developed “enhanced” torture techniques based on these ideas after 9/11 to create “learned helplessness” in prisoners. The idea was to fragment their consciousness so that they would release a flood of fragments, some of which might be useful intelligence.
  • Individualism. A major theme is the rise of individualism since the war and how individuals are controlled. China’s social credit model of explicit control through surveillance is contrasted with the West’s consumer-driven platform surveillance. Either way, Curtis’ conclusion seems to be that we need to regain confidence in our own individual power to choose our future and strive for it. We need to stop letting others control us with fear or distract us with consumption. We need to choose our future.

In some ways the series is a plea for everyone to make up their own stories from their fragmentary experience. The series starts with this quote:

The ultimate hidden truth of the world is that it is something we make, and could just as easily make differently. (David Graeber)

Of course, Curtis’ series could just be a conspiracy story that he wove out of the fragments he found in the BBC archives.

Addressing the Alarming Systems of Surveillance Built By Library Vendors

The Scholarly Publishing and Academic Resources Coalition (SPARC) is drawing attention to how we need to be Addressing the Alarming Systems of Surveillance Built By Library Vendors. This was triggered by a story in The Intercept, LexisNexis to Provide Giant Database of Personal Information to ICE.

The company’s databases offer an oceanic computerized view of a person’s existence; by consolidating records of where you’ve lived, where you’ve worked, what you’ve purchased, your debts, run-ins with the law, family members, driving history, and thousands of other types of breadcrumbs, even people particularly diligent about their privacy can be identified and tracked through this sort of digital mosaic. LexisNexis has gone even further than merely aggregating all this data: The company claims it holds 283 million distinct individual dossiers of 99.99% accuracy tied to “LexIDs,” unique identification codes that make pulling all the material collected about a person that much easier. For an undocumented immigrant in the United States, the hazard of such a database is clear. (The Intercept)

That LexisNexis has been building databases on people isn’t new. Sarah Brayne has a book about predictive policing, Predict and Surveil, in which, among other things, she describes how the LAPD uses Palantir and how police databases integrated in Palantir are enhanced by commercial databases like those sold by LexisNexis. (There is an essay excerpted from the book here: Enter the Dragnet.)

I suspect environments like Palantir make all sorts of smaller and specialized databases more commercially valuable, which is leading what were once library database providers to expand their business. Before, a database about repossessions might have been of interest only to a specialized community. Now it can be linked to other information and becomes another dimension of the data. In particular, these databases provide information about all the people who aren’t in police databases. They provide the breadcrumbs needed to surveil those not documented elsewhere, as the toy example below suggests.
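
To see why a stable identifier matters so much, consider a toy illustration (invented tables and a made-up “lex_id” key, not LexisNexis’s actual schema): once disparate records share a key, assembling a dossier is just a handful of joins.

```python
# Toy record linkage: three unrelated datasets become one dossier
# as soon as they share an identifier (here a made-up "lex_id").
import pandas as pd

addresses = pd.DataFrame({
    "lex_id": [101, 101, 102],
    "address": ["12 Elm St", "88 Oak Ave", "3 Pine Rd"],
})
debts = pd.DataFrame({
    "lex_id": [101, 102],
    "debt": ["auto loan, delinquent", "none on file"],
})
vehicles = pd.DataFrame({
    "lex_id": [101],
    "plate": ["ABC-123"],
})

# Each join adds another "dimension of data" about the same person.
dossier = (addresses
           .merge(debts, on="lex_id", how="left")
           .merge(vehicles, on="lex_id", how="left"))
print(dossier[dossier["lex_id"] == 101])
```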

The SPARC call points out that we (academics, university libraries) have been funding these database providers. 

Dollars from library subscriptions, directly or indirectly, now support these systems of surveillance. This should be deeply concerning to the library community and to the millions of faculty and students who use their products each day and further underscores the urgency of privacy protections as library services—and research and education more generally—are now delivered primarily online.

This raises the question of our complicity and whether we could do without some of these companies. At a deeper level it raises questions about the curiosity of the academy. We are dedicated to knowledge as an unalloyed good and are at the heart of a large system of surveillance – surveillance of the past, of literature, of nature, of the cosmos, and of ourselves.

A Digital Project Handbook

A peer-reviewed, open resource filling the gap between platform-specific tutorials and disciplinary discourse in digital humanities.

From a list I am on I learned about Visualizing Objects, Places, and Spaces: A Digital Project Handbook. This is a highly modular textbook that covers a lot of the basics of project management in the digital humanities. They currently have a call for “case studies (research projects) and assignments that showcase archival, spatial, narrative, dimensional, and/or temporal approaches to digital pedagogy and scholarship.” The handbook is edited by Beth Fischer (Postdoctoral Fellow in Digital Humanities at the Williams College Museum of Art) and Hannah Jacobs (Digital Humanities Specialist, Wired! Lab, Duke University), but parts are authored by all sorts of people.

What I like about it is the way they have split up the modules and organized things by the type of project. They also have deadlines, which seem to be for new iterations of materials and for completion of different parts. This could prove to be a great resource for teaching project management.

An Anecdoted Topography of Chance

Following a rambling conversation with his friend Robert Filliou, Daniel Spoerri one day mapped the objects lying at random on the table in his room, adding a rigorously scientific description of each. These objects subsequently evoked associations, memories and anecdotes from both the original author and his friends …

I recently bought a copy of Spoerri and friends’ artist’s book, An Anecdoted Topography of Chance. The first edition dates from 1966, but that was based on a version that passed as the catalogue for an exhibition by Spoerri in 1962. This 2016 version has a footnote to the title (in the lower right of the cover) that reads:

* Probably definitive re-anecdoted version

The work is essentially a collection of annotations to a map of the dishes and other things that were on Spoerri’s sideboard in his apartment. You start with the map, which looks like an archaeological diagram, and follow anecdotes about the items, which are, in turn, commented on by the other authors. Hypertext before hypertext.

While the work seems to have been driven by the chance items on the small table, there is also an autobiographical element, as these items give the authors excuses to tell stories about their intersecting lives.

I wonder if this would be an example of a work of art of information.

TEXT-MODE: Tumblr about text art

“A dude”, 1886. Published in the poetry section of the January issue of The Undergraduate, Middlebury’s newspaper.

From Pinterest I came across this great tumblr, TEXT-MODE, which gathers “a collection of text graphics and related works, stretching back thousands of years.” Note the image above of a visual poem about “A Dude” from 1886. Included are all sorts of examples, from typewriter art to animations to historical emoticons.