Mina S. Rees and Early Computers

Reading Thomas P. Hughes’ book Rescuing Prometheus I came across a reference to Dr Mina S. Rees who, in different senior roles at the Office of Naval Research in the late 1940s and early 50s, played a role in promoting early computing research. This led me to her 1950 Science article The Federal Computing Machine Program (December 1950, Vol. 112, No. 2921, pp. 731-736), a terrific survey of the state of computing at the time that is a pleasure to read and nicely captures the balance between analogue and electronic machines and the promise of each. I was particularly struck by the wry humour of the overview. For example, in the opening she lists what she will not cover, and jokes that,

For an adequate discourse on the military applications of automatically sequenced electronic computers, I direct you to recent Steve Canyon comic strips in which a wonderful electronic brain that could see and shoot down planes at great distances was saved from the totalitarian forces of evil. (p. 731)

The Steve Canyon comic in question is a “Mechanical Brain” story her audience would have recognized. (See this review of Milton Caniff’s Steve Canyon 1950 compilation.) Interestingly (perhaps because she had read Jay Forrester’s reports about air defense), Whirlwind, one of the computers she mentions, went on to be developed into the SAGE system, which was designed to semi-automatically “see and shoot down planes at great distances”.

Rees’ humour, humility and prescience can also be seen in her concession that visual displays and interfaces are important to certain problems,

As one who has suspected from the beginning that all oscilloscope displays were manipulated by a little man standing in hiding near by, I am happy now to concede that in several of the problems we are now attacking the introduction of visual display equipment has substantial merit. (p. 732)

She recognized the value of a “broad point of view” that looked at computing as more than efficient number crunching. This article reminds us of how computing was understood differently in the 1940s and 1950s and thereby helps us reacquire a broad point of view on computing with some humour.

For a memorial biography of Dr Rees, see here (PDF).

We Have Never Been Digital

Historian of technology Thomas Haigh has written a thoughtful reflection on the intersection of computing and the humanities, We Have Never Been Digital (PDF) (Communications of the ACM, 57:9, Sept 2014, 24-28). He gives a nice tour of the history of the idea that computers are revolutionary, starting with Berkeley’s 1949 Giant Brains: Or Machines That Think. He then traces the shift to the “digital”, locating it in the launch of Wired, Stewart Brand, and Negroponte’s Being Digital. He rightly points out that the digital is not evenly distributed and that it has a material and analogue basis. Just as Latour argued that we have never been (entirely) modern, Haigh points out that we have never been and never will be entirely digital.

This leads to a critique of the “dated neologism” digital humanities. In a cute move he asks what makes humanists digital. Is it using email or building a web page? He rightly points out that the definition has been changing as the technology does, though I’m not sure that is a problem. The digital humanities should change – that is what makes disciplines vital. He also feels we get the mix of computing and the humanities wrong: that we should be using humanities methods to understand technology, not the other way around.

There is a sense in which historians of information technology work at the intersection of computing and the humanities. Certainly we have attempted, with rather less success, to interest humanists in computing as an area of study. Yet our aim is, in a sense, the opposite of the digital humanists: we seek to apply the tools and methods of the humanities to the subject of computing…

On this I think he is right – we should be doing both: studying computing through the lens of the humanities and experimenting with the uses of computing in the humanities. I would go further and suggest that one way to understand computing is to try it on that which you know, and that is the distinctive contribution of the digital humanities. We don’t just “yack” about it, we try to “hack” it. We think through technology in a way that should complement the philosophy and history of technology. Haigh should welcome the digital humanities, or imagine what we could be, rather than dismiss the field because we haven’t committed to only humanistic methods, however limited.

Haigh concludes with a “suspicion” I have been hearing since the 1990s – that the digital humanities will disappear (like all trends), leaving only real historians and other humanists using the tools appropriate to their original fields. He may be right, but as a historian he should ask why certain disciplines thrive and others don’t. I suspect that science and technology studies could suffer the same fate – the historians, sociologists, and philosophers could go back to their home disciplines and stop identifying with the interdisciplinary field. For that matter, what essential claim does any discipline have? Could history fade away because all of us do it, or statistics disappear because statistical techniques are used in other disciplines? Who needs math when everyone does it?

The use of computing in the other humanities is exactly why the digital humanities is thriving – we provide a trading zone for new methods and a place where they can be worked out across the concerns of other disciplines. Does each discipline have to work out on its own how texts should be encoded for interchange and analysis, or do we share enough to do it together under a rubric like computing in the humanities? (The sketch below shows the sort of shared tooling I mean.) As for changing methods – the methods definitive of the digital humanities will change as they are discussed, traded, and absorbed into other disciplines. So no, there isn’t a particular technology that is definitive of DH, and that is exactly what other disciplines want – a collegial discipline from which to draw experimental methods. Why is it that the digital humanities are expected to be coherent, stable and definable in a way no other humanities discipline is?
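To make the encoding question concrete: here is a minimal sketch, in Python with an invented two-line markup scheme (real interchange encodings like TEI are far richer), of what shared encoding buys. Once the structure of a text is explicit, the same generic tools can serve any discipline that works with texts.

```python
# A toy text encoding: explicit structure makes a text both
# interchangeable (others can parse it) and analysable (tools can query it).
import xml.etree.ElementTree as ET
from collections import Counter

doc = """<text>
  <line n="1">Of arms and the man I sing</line>
  <line n="2">who first from the coasts of Troy</line>
</text>"""

root = ET.fromstring(doc)

# Interchange: the markup says where each line begins and ends.
lines = {line.get("n"): line.text for line in root.iter("line")}

# Analysis: a simple word-frequency count over the encoded text.
words = Counter(w.lower() for text in lines.values() for w in text.split())
print(words.most_common(3))
```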

Here I have to say that Matt Kirschenbaum has done us an unintentional disservice by discussing the tactical use of “digital humanities” in English departments. He has led others to believe that there is something essentially mercenary or instrumental to the field that dirties it compared to the pure and uneconomical pursuit of truth to be found in, for example, science and technology studies. The truth is that no discipline has ever been pure or entirely corrupt. STS has itself been the site of positioning at every university I’ve been at. It sounds from Haigh’s account as though STS has suffered the same trials of not being taken seriously by the big departments that humanities computing worried about for decades. Perhaps STS could partner with DH to develop a richer trading zone for ideas and techniques.

I should add that many of us are in DH not for tactical reasons, but because it is a better home for the thinking-through we believe is important than the disciplines we came from. I was visiting the University of Virginia in 2001-2 and participated in the NEH-funded meetings to develop the MA in Digital Humanities. My memory is that when we discussed names for the programme, the goal was to make the field accessible. We were choosing among imperfect names, none of which could ever communicate the possibilities we hoped for. In the end it was a choice as to which name would best communicate to potential students what they could study.

Xanadu Released

Screen shot of Xanadu

Hacker Trips has an article about how Ted Nelson’s Xanadu finally gets released after 51 years (with Transclusion). The article describes a conference in Ted Nelson’s honour. At the end he is quoted to this effect:

To wind up his story, Ted Nelson stated that he was “dealt one of the best hands in history, and misplayed it to the hilt. [He] could have accomplished so much more. [He] was here 1st, and it’s all gone wrong. [He] believes this would be a very different world and better world if [he] had gotten leverage. The world has gone the wrong way.”

Nelson also announced a demo of a working version of Xanadu with transclusion. Open Xanadu is up at the Xanadu site.
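Transclusion, Nelson’s term for including part of one document in another by reference rather than by copying, can be modelled in a few lines. This is a toy sketch of the idea, assuming an invented in-memory store; it is not how OpenXanadu is implemented:

```python
# A toy model of transclusion: a document holds pointers into source
# documents instead of copied text, and the text is pulled in on demand.
sources = {
    "permanent/doc1": "Everything is deeply intertwingled.",
}

# A composite document: literal strings mixed with (source, start, end) spans.
composite = [
    "As Nelson put it, everything is ",
    ("permanent/doc1", 21, 34),   # transcludes "intertwingled" from the source
    ".",
]

def render(parts):
    """Resolve each transclusion pointer against its source document."""
    out = []
    for part in parts:
        if isinstance(part, tuple):
            doc, start, end = part
            out.append(sources[doc][start:end])
        else:
            out.append(part)
    return "".join(out)

print(render(composite))
# As Nelson put it, everything is intertwingled.
```

Because the span is a pointer rather than a copy, the quoted text stays connected to its origin, which is what Nelson wanted hypertext to preserve.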

Vintage computers and technology in Toronto

From Sean and Boing Boing I got to Vintage computers and technology in Toronto. Derek Flack went into the Toronto Public Library’s archives and scanned some of the photographs they have of vintage computers. Some of the pictures are of control systems that are not really computers, but they are nonetheless cool. This complements the research we are doing going through the Globe and Mail, looking at what was being written about computers from the 50s to the 70s.

The Information: A History, a Theory, a Flood | James Gleick


I just finished James Gleick’s The Information: A History, a Theory, a Flood. It is one of the best books I’ve read in some time. Despite its length (527 pages including index, bibliography and notes) it doesn’t exhaust the subject pedantically. Many of the chapters hint at more things to think about. It also takes an unexpected trajectory. I expected it to go from Babbage and the telegraph to computers and then to Engelbart and information extensions. Instead he works his way through physics and math. He looks at ideas about how to measure information and at the materiality of information, explaining the startling conclusion that “Forgetting takes work.” (p. 362)
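The physics usually cited for that conclusion is Landauer’s principle: erasing one bit of information must dissipate at least kT ln 2 of energy. A back-of-the-envelope sketch (the constants are standard; the gigabyte example is my own illustration, not Gleick’s):

```python
# Back-of-the-envelope: the minimum energy cost of "forgetting" (erasing)
# information, per Landauer's principle: E >= k * T * ln(2) per bit erased.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300            # room temperature, kelvin

energy_per_bit = k * T * math.log(2)   # joules to erase one bit, ~2.9e-21 J
gigabyte = 8e9                         # bits in a gigabyte

print(f"Erasing one bit:    {energy_per_bit:.2e} J")
print(f"Erasing a gigabyte: {energy_per_bit * gigabyte:.2e} J")
```

The numbers are tiny, but they are not zero: forgetting has an irreducible physical cost, which is the materiality Gleick is pointing at.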

The second-to-last chapter, “News News Every Day”, is as good an exploration of the issue of information overload as I have read. He suggests that the perception of “overload” comes from the metaphoric application of our understanding of electrical circuits, as in “overloading the circuits.” If one believes the nervous system is like an electrical network then it is possible to overload the network with too much input. That, after all, is what Carr argues in The Shallows (though he uses cognitive science). Nonetheless, electrical and now computing metaphors for the mind and nervous system haunt the discussion about information. Are you overloaded with information when you turn a corner in the woods and see a new vista of lakes and mountains? What Gleick does is connect information overload with the insight that it is forgetting that is expensive. We feel overloaded because it takes work not to find information, but to filter it out. To have a peaceful moment thinking you need to forget, and that is expensive.

Gleick ends, as he should, with meaning. Information theory ignores meaning, but information is not informative unless it means something to someone at some time. Where does the meaning come from? Is it interconnectedness? Does the hypertext network (and tools that measure connectedness like Google) give meaning to the data? No, for Gleick it is human choice.

As ever, it is the choice that informs us (in the original sense of that word). Selecting the genuine takes work; then forgetting takes even more work. (p. 425)

LOGICOMIX: philosophical comics


Sean lent me LOGICOMIX (Doxiadis, Apostolos, et al. New York: Bloomsbury, 2009), a graphic novel about Bertrand Russell and logic. The comic novel has a series of frames, the outer of which is a discussion between the real authors about logic and passion. They end up going to see the Oresteia, and the novel ends with Athena’s judgement, which brings the Furies (passion and revenge) together with reason into wisdom in a city (Athens) through justice.

This frame echoes the main internal story, which is Russell’s struggle to found math in logic. Much of the novel is a tour through the history of logic and its important paradoxes. This tour runs in parallel with a biography of Russell. At all levels the novel seems to argue that you have to balance passion with reason. Russell tried to do it in his life, logicians discovered through paradoxes that there was no purely logical foundation, and the graphic novel uses comic art to illustrate the story of logic (hence “logicomix”). There is a dog called “Manga” (which apparently in Greek means “cool dude”) who chases the owl (of reason).
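The pivot of that tour is Russell’s own paradox. In naive set theory you may form the set of all sets that are not members of themselves, and asking whether that set contains itself yields a contradiction either way:

```latex
% Russell's paradox: define R as the set of all sets not members of themselves.
R = \{\, x \mid x \notin x \,\}
\qquad \text{then} \qquad
R \in R \iff R \notin R
```

This is the crack that undermined Frege’s system and set Russell on the long path to Principia Mathematica.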

Does information want to be free?

I’ve been thinking about the phrase “information wants to be free”, attributed to Stewart Brand by Chris Anderson in Free: The Future of a Radical Price (see chapter 6). Brand originally saw this as a paradox between information wanting to be expensive and wanting to be free,

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other. (Brand, 1984)

Anderson in Chapter 6 of Free goes back to Brand to find out why he anthropomorphized information instead of saying “information should be free.”  (Brand felt it sounded better and that it focused attention on information, not people.)

While the phrase is memorable as it is (partly because it ascribes intention to information), I suspect it would be more accurate to say that “information infrastructure is designed to promote free access.” The web was not designed to facilitate payment for information (as Ted Nelson imagined his Xanadu docuverse would be). The design and economics of our infrastructure brought the cost of publishing and dissemination down to the cost of having an internet connection and an account on a server. That made it easy for all sorts of people who have non-commercial reasons for sharing information to publish free information. It did not, however, mean that all information is available free. There are still people who resist sharing information for all sorts of reasons. In particular I am interested in indigenous communities that resist sharing their stories because that would turn them into information. Their stories are meant to be told in a context, by someone who has rights to that story, to others who are ready for the story. Posting a story on the net decontextualizes it and reduces it to mere information, which in its freedom is neither as free nor as informative as the original telling.

For a useful web page on the origin and uses of the aphorism, see Roger Clarke’s ‘Information Wants to be Free’.

Internet Archive: Movies from the History of Computing

Willard McCarty on Humanist (Vol. 23, No. 116) pointed to some early films about computing which are worth looking at. One is “The Information Machine” from IBM in 1956, an animated cartoon that places the computer within a history of human information invention. It presents three functions for computing:

  1. Control or Balance (controlling complex systems)
  2. Design (helping us design and think)
  3. Simulation (modelling and predicting)

Another film is On Guard! The Story of SAGE, also from IBM. This is about IBM’s contributions to air defense, specifically the SAGE system and the development of airborne modular computing. There is a fun part about the interactive operator terminal that visualizes data (as opposed to a TV that shows video). The narrator actually talks about visualization (though not interactivity).

RFCs: How the Internet Got Its Rules

Stephen D. Crocker has written an Op-Ed, How the Internet Got Its Rules (April 6, 2009), about the Requests for Comments, or R.F.C.’s, of the Internet. He looks back on writing the first R.F.C. 40 years ago as a student assigned to write up notes from a meeting. He chose to call it an R.F.C. because:

What was supposed to be a simple chore turned out to be a nerve-racking project. Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning.

Calling them R.F.C.’s set the tone for the consensual culture.

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.

Another feature was layering for independence, which allowed people to build new technologies on top of older ones without asking permission.
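A toy sketch of what that layering means: each layer wraps the payload handed down from above without needing to understand it, so anyone can build a new top layer without touching, or asking permission of, the layers below. The layer names here are invented for illustration, not any real protocol’s:

```python
# Toy protocol layering: each layer wraps the payload from the layer
# above without inspecting it, so layers can evolve independently.
def app_layer(message: str) -> str:
    return f"APP|{message}"

def transport_layer(payload: str) -> str:
    return f"TCPish|len={len(payload)}|{payload}"

def network_layer(segment: str) -> str:
    return f"IPish|dst=10.0.0.2|{segment}"

# A new application can be built "on top" without changing lower layers.
packet = network_layer(transport_layer(app_layer("hello")))
print(packet)
# IPish|dst=10.0.0.2|TCPish|len=9|APP|hello
```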

Thanks to Dan Cohen on Twitter for this.

What Is Infrastructure?

I’ve written another essay. It seems to be what I do on Sundays. This time I’m trying to work out What Is Infrastructure and how infrastructure differs from supplies. The question is a way into trying to understand the role of big projects like TAPoR or Bamboo, both of which I am involved in (at very different levels). As I thought about it I came to a couple of conclusions:

  • Defining things as infrastructure or cyberinfrastructure is a political move that tries to change how we frame services so we can propose different (and ongoing) ways of funding them. To be more blunt, defining a service as infrastructure moves it from something you ask for a limited grant for to something you ask for ongoing funding for (or something you set up a consortium to provide ongoing funding for).
  • I can imagine a lighter way of weaving infrastructure out of existing industry-provided stuff that we should take seriously.
  • Humanities research infrastructure should be public, as in available to everyone and available internationally. Not only can the public participate in humanities research, but opening it up to the public is a way of engaging them. Perhaps the relevance of the humanities lies not in their products, but in their participatory processes. Philosophy is not a science best done in a lab that will eventually produce a cure for ignorance. Philosophy is a love of wisdom we should share because we never owned it and we were never appointed its keepers.

Why not crowdsource the humanities? What would it take to make the (arts and) humanities the public disciplines? What sorts of infrastructure would engage the broader public?