From Humanist and then MIT News: Professor Emeritus Seymour Papert, pioneer of constructionist learning, dies at 88. Papert studied under Piaget and thought about how computers could provide children with a way to construct knowledge. Among other things he developed the Logo language, which I learned at one point. He also collaborated with the LEGO folks on Mindstorms, named after his book of that title.
This is a story from early in the technological revolution, when the application was out searching for the hardware, from a time before the Internet, a time before the PC, before the chip, before the mainframe. From a time even before programming itself. (Winter 1999, 3)
Father Busa is rightly honoured as one of the first humanists to use computing for a humanities research task. He is considered the founder of humanities computing for his innovative application of information technology, for the considerable influence of his project and methods, and not least for his generosity to others. He not only worked out how to use the information technology of the late 1940s and 1950s, but also pioneered a relationship with IBM around language engineering and, with their support, generously shared his knowledge widely. Ironically, while we have all heard his name and the origin story of his research into presence in Aquinas, we know relatively little about what actually occupied his time – the planning and implementation of what was, for its time, one of the major research computing projects, the Index Thomisticus.
This blog essay is an attempt to outline some of the features of the Index Thomisticus as a large-scale information technology project, as a way of opening a discussion on the historiography of computing in the humanities. It follows from a two-day visit to the Busa Archives at the Università Cattolica del Sacro Cuore. The visit was made possible by Marco Carlo Passarotti, who directs the “Index Thomisticus” Treebank project in CIRCSE (Centro Interdisciplinare di Ricerche per la Computerizzazione dei Segni dell’Espressione – Interdisciplinary Centre for Research into the Computerization of Expressive Signs), which evolved out of GIRCSE (Gruppo not Centro – Group not Centre), the group that Father Busa helped form in the 1980s. Passarotti not only introduced me to the archives, he also helped correct this blog, as he is himself an archive of stories and details. He grew up in Gallarate, where his family knew Busa; he studied under Busa, took over the project, and is one of the few who can read Busa’s handwriting.
Original GIRCSE Plaque kept by Passarotti
Stephen Wolfram has written a nice long blog essay on Untangling the Tale of Ada Lovelace. He tackles the question of whether Ada really contributed or was overestimated. He provides a biography of both Ada and Babbage. He speculates about what they were like and could have been. He believes Ada saw the big picture in a way Babbage didn’t and was able to communicate it.
Ada Lovelace was an intelligent woman who became friends with Babbage (there’s zero evidence they were ever romantically involved). As something of a favor to Babbage, she wrote an exposition of the Analytical Engine, and in doing so she developed a more abstract understanding of it than Babbage had—and got a glimpse of the incredibly powerful idea of universal computation.
The essay reflects on what might have happened if Ada had not died prematurely. Wolfram thinks they would have finished the Analytical Engine and possibly explored building an electromechanical version.
We will never know what Ada could have become. Another Mary Somerville, famous Victorian expositor of science? A Steve-Jobs-like figure who would lead the vision of the Analytical Engine? Or an Alan Turing, understanding the abstract idea of universal computation?
That Ada touched what would become a defining intellectual idea of our time was good fortune. Babbage did not know what he had; Ada started to see glimpses and successfully described them.
The Storage Engine is a timeline of computer storage from the Computer History Museum. It is easy to navigate and goes back to Pliny and various analogue storage systems. The items are well documented with multiple images, contemporary documents, and current information. Lots of good historical information here.
The Computer History Museum seems to be doing a number of these technology history sites including one called The Silicon Engine with a timeline of semiconductors in computers.
An important book for anyone doing the history of computing is From Airline Reservations to Sonic the Hedgehog by Martin Campbell-Kelly. This book more or less invents the field of software history by outlining its important phases, sectors and sources. Other histories have focused on individual companies, heroes, or periods; Campbell-Kelly tries to survey the whole history (at least up to 1995) and define what needs to be considered and what we don’t know. In particular he tries to correct the consumer view that the history of software is about Microsoft. To that end he spends a lot of time on mainframe software and services like IBM’s CICS (Customer Information Control System), which allows ATMs and other systems to reliably communicate transactions.
Martin Campbell-Kelly in the first chapter outlines three phases to the history of software that also correspond to sectors of the industry:
- From the mid 1950s, software contracting
- From the mid 1960s, corporate software products
- From the late 1970s, packaged mass-market software products
You can read an interesting exchange about the book here that reviews the book, criticizes it and gives Campbell-Kelly a chance to respond.
Bibliographic reference: Campbell-Kelly, M. (2003). From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press.
Reading Thomas P. Hughes’ book Rescuing Prometheus, I came across a reference to Dr Mina S. Rees who, in different senior roles at the Office of Naval Research in the late 1940s and early 50s, was instrumental in promoting early computing research. This led me to her 1950 Science article, The Federal Computing Machine Program (December 1950, Vol. 112, No. 2921, pp. 731-736), a terrific survey of the state of computing at the time that is both a pleasure to read and nicely captures the balance and promise of analogue and electronic machines at the time. I was particularly struck by the wry humour of the overview. For example, in the opening she talks about what she will not cover, and jokes that,
For an adequate discourse on the military applications of automatically sequenced electronic computers, I direct you to recent Steve Canyon comic strips in which a wonderful electronic brain that could see and shoot down planes at great distances was saved from the totalitarian forces of evil. (p. 731)
The Steve Canyon comic in question is a “Mechanical Brain” story her audience would have recognized. (See this review of Milton Caniff’s Steve Canyon 1950 compilation.) Interestingly (perhaps because she had read Jay Forrester’s reports about air defense), Whirlwind, one of the computers she mentions, went on to be developed into the SAGE system, which was designed to semi-automatically “see and shoot down planes at great distances”.
Rees’ humour, humility and prescience can also be seen in her concession that visual displays and interfaces are important to certain problems:
As one who has suspected from the beginning that all oscilloscope displays were manipulated by a little man standing in hiding near by, I am happy now to concede that in several of the problems we are now attacking the introduction of visual display equipment has substantial merit. (p. 732)
She recognized the value of a “broad point of view” that looked at computing as more than efficient number crunching. This article reminds us of how computing was understood differently in the 1940s and 1950s and thereby helps us reacquire a broad point of view on computing with some humour.
For a memorial biography of Dr Rees, see here (PDF).
Historian of technology Thomas Haigh has written a nice reflection on the intersection of computing and the humanities, We Have Never Been Digital (PDF) (Communications of the ACM, 57:9, Sept 2014, 24-28). He gives a nice tour of the history of the idea that computers are revolutionary starting with Berkeley’s 1949 Giant Brains: Or Machines That Think. He talks about the shift to the “digital” locating it in the launch of Wired, Stewart Brand and Negroponte’s Being Digital. He rightly points out that the digital is not evenly distributed and that it has a material and analogue basis. Just as Latour argued that we have never been (entirely) modern, Haigh points out that we have never been and never will be entirely digital.
This leads to a critique of the “dated neologism” digital humanities. In a cute move he asks what makes humanists digital: is it using email or building a web page? He rightly points out that the definition has been changing as the technology does, though I’m not sure that is a problem. The digital humanities should change – that is what makes disciplines vital. He also feels we get the mix of computing and the humanities wrong; that we should be using humanities methods to understand technology, not the other way around.
There is a sense in which historians of information technology work at the intersection of computing and the humanities. Certainly we have attempted, with rather less success, to interest humanists in computing as an area of study. Yet our aim is, in a sense, the opposite of the digital humanists: we seek to apply the tools and methods of the humanities to the subject of computing…
On this I think he is right – we should be doing both: studying computing through the lens of the humanities and experimenting with the uses of computing in the humanities. I would go further and suggest that one way to understand computing is to try it on that which you know, and that is the distinctive contribution of the digital humanities. We don’t just “yack” about it, we try to “hack” it. We think-through technology in a way that should complement the philosophy and history of technology. Haigh should welcome the digital humanities, or imagine what we could be, rather than dismiss the field because we haven’t committed to only humanistic methods, however limited.
Haigh concludes with a “suspicion” I have been hearing since the 1990s – that the digital humanities will disappear (like all trends), leaving only real historians and other humanists using the tools appropriate to their original fields. He may be right, but as a historian he should ask why certain disciplines thrive and others don’t. I suspect that science and technology studies could suffer the same fate – the historians, sociologists, and philosophers could go back to their home disciplines and stop identifying with the interdisciplinary field. For that matter, what essential claim does any discipline have? Could history fade away because all of us do it, or statistics disappear because statistical techniques are used in other disciplines? Who needs math when everyone does it?
The use of computing in the other humanities is exactly why the digital humanities is thriving – we provide a trading zone for new methods and a place where they can be worked out across the concerns of other disciplines. Does each discipline have to work out on its own how texts should be encoded for interchange and analysis, or do we share enough to do it together under a rubric like computing in the humanities? As for changing methods – the methods definitive of the digital humanities, the ones discussed and traded, will change as they get absorbed into other disciplines. So no, there isn’t a particular technology that is definitive of DH; what other disciplines want is a collegial discipline from which to draw experimental methods. Why is it that the digital humanities are expected to be coherent, stable and definable in a way no other humanities discipline is?
Here I have to say that Matt Kirschenbaum has done us an unintentional disservice by discussing the tactical use of “digital humanities” in English departments. He has led others to believe that there is something essentially mercenary or instrumental to the field that dirties it compared to the pure and uneconomical pursuit of truth to be found in, for example, science and technology studies. The truth is that no discipline has ever been entirely pure or entirely corrupt. STS has itself been the site of positioning at every university I’ve been at. It sounds from Haigh as if STS has suffered the same trials of not being taken seriously by the big departments that humanities computing worried about for decades. Perhaps STS could partner with DH to develop a richer trading zone for ideas and techniques.
I should add that many of us are in DH not for tactical reasons, but because it is a better home for the thinking-through we believe is important than the disciplines we came from. I was visiting the University of Virginia in 2001-2 and participated in the NEH-funded meetings to develop the MA in Digital Humanities. My memory is that when we discussed names for the programme, the goal was to make the field accessible. We were choosing among imperfect names, none of which could ever communicate the possibilities we hoped for. In the end it was a choice as to what would best communicate to potential students what they could study.
Hacker Trips has an article about how Ted Nelson’s Xanadu finally got released after 51 years (with transclusion). The article describes a conference in Ted Nelson’s honour. At the end he is quoted to this effect:
To wind up his story, Ted Nelson stated that he was “dealt one of the best hands in history, and misplayed it to the hilt. [He] could have accomplished so much more. [He] was here 1st, and it’s all gone wrong. [He] believes this would be a very different world and better world if [he] had gotten leverage. The world has gone the wrong way.”
Nelson also announced a demo of a working version of Xanadu with transclusion. Open Xanadu is up at the Xanadu site.
From Sean and Boing Boing I got to Vintage computers and technology in Toronto. Derek Flack went into the Toronto Public Library’s archives and scanned some of the photographs they have of vintage computers. Some of the pictures are of control systems that are not really computers but, nonetheless, they are cool. This complements the research we are doing going through the Globe and Mail looking at what was written about computers in the 50s to 70s.
I just finished James Gleick’s The Information: A History, a Theory, a Flood. It is one of the best books I’ve read in some time. Despite its length (527 pages including index, bibliography and notes) it doesn’t exhaust the subject pedantically. Many of the chapters hint at more things to think about. It also takes an unexpected trajectory. I expected it to go from Babbage and the telegraph to computers and then to Engelbart and information extensions. Instead he works his way through physics and math. He looks at ideas about how to measure information and at the materiality of information, explaining the startling conclusion that “Forgetting takes work.” (p. 362)
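The measure of information Gleick traces back through Shannon can be made concrete: Shannon’s entropy depends only on the frequencies of symbols, not on what they mean. A minimal Python sketch (the function name is my own) of the standard formula H = −Σ p·log₂(p):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))
    over the relative frequencies of the symbols in the text."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Two equally frequent symbols carry 1 bit per symbol;
# a skewed distribution carries less.
print(shannon_entropy("aabb"))  # 1.0
print(shannon_entropy("aaab"))  # ~0.811
```

Note that a Latin text and a random scramble of its letters score the same – which is exactly the gap between information and meaning that Gleick returns to at the end of the book.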
The second-to-last chapter, “News News Every Day”, is as good an exploration of the issue of information overload as I have read. He suggests that the perception of “overload” comes from the metaphoric application of our understanding of electrical circuits, as in “overloading the circuits.” If one believes the nervous system is like an electrical network, then it is possible to overload the network with too much input. That, after all, is what Carr argues in The Shallows (though he uses cognitive science). Nonetheless, electrical and now computing metaphors for the mind and nervous system haunt the discussion about information. Are you overloaded with information when you turn a corner in the woods and see a new vista of lakes and mountains? What Gleick does is connect information overload with the insight that it is forgetting that is expensive. We feel overloaded because it takes work not to find information but to filter it out. To have a peaceful moment thinking you need to forget, and that is expensive.
Gleick ends, as he should, with meaning. Information theory ignores meaning, but information is not informative unless it means something to someone at some time. Where does the meaning come from? Is it interconnectedness? Does the hypertext network (and tools that measure connectedness like Google) give meaning to the data? No, for Gleick it is human choice.
As ever, it is the choice that informs us (in the original sense of that word). Selecting the genuine takes work; then forgetting takes even more work. (p. 425)