Vintage computers and technology in Toronto

From Sean and Boing Boing I got to Vintage computers and technology in Toronto. Derek Flack went into the Toronto Public Library’s archives and scanned some of the photographs they have of vintage computers. Some of the pictures are of control systems that are not really computers, but they are cool nonetheless. This complements the research we are doing, going through the Globe and Mail to see what was being written about computers from the 50s to the 70s.

The Information: A History, a Theory, a Flood | James Gleick


I just finished James Gleick’s The Information: A History, a Theory, a Flood. One of the best books I’ve read in some time. Despite its length (527 pages including index, bibliography and notes) it doesn’t exhaust the subject pedantically. Many of the chapters hint at more things to think about. It also takes an unexpected trajectory. I expected it to go from Babbage and the telegraph to computers and then Engelbart and information extensions. Instead he works his way through physics and math. He looks at ideas about how to measure information and at the materiality of information, explaining the startling conclusion that “Forgetting takes work.” (p. 362)
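If I read him right, the physical result behind that line is Landauer’s principle: erasing (forgetting) a bit of information dissipates a minimum amount of energy as heat, so forgetting literally takes work. A minimal statement of the bound, on the assumption that this is the result Gleick has in mind:

```latex
% Landauer's principle (1961): erasing one bit of information dissipates
% at least k_B T ln 2 of energy as heat, where k_B is Boltzmann's constant
% and T is the temperature of the surrounding environment.
E_{\text{erase one bit}} \;\geq\; k_B \, T \ln 2
```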

The second-to-last chapter, “New News Every Day,” is as good an exploration of the issue of information overload as I have read. He suggests that the perception of “overload” comes from the metaphoric application of our understanding of electrical circuits, as in “overloading the circuits.” If one believes the nervous system is like an electrical network then it is possible to overload the network with too much input. That, after all, is what Carr argues in The Shallows (though he uses cognitive science). Nonetheless, electrical and now computing metaphors for the mind and nervous system are haunting the discussion about information. Are you overloaded with information when you turn a corner in the woods and see a new vista of lakes and mountains? What Gleick does is connect information overload with the insight that it is forgetting that is expensive. We feel overloaded because it takes work not to find information, but to filter it out. To have a peaceful moment thinking you need to forget, and that is expensive.

Gleick ends, as he should, with meaning. Information theory ignores meaning, but information is not informative unless it means something to someone at some time. Where does the meaning come from? Is it interconnectedness? Does the hypertext network (and tools that measure connectedness like Google) give meaning to the data? No, for Gleick it is human choice.

As ever, it is the choice that informs us (in the original sense of that word). Selecting the genuine takes work; then forgetting takes even more work. (p. 425)

LOGICOMIX: philosophical comics


Sean lent me LOGICOMIX (Doxiadis, Apostolos, et al. New York: Bloomsbury, 2009), a graphic novel about Bertrand Russell and logic. The comic has a series of frames, the outer of which is a discussion between the real authors about logic and passion. They end up going to see the Oresteia, and the novel ends with Athena’s judgement that brings the Furies (passion and revenge) together with reason into wisdom in a city (Athens) through justice.

This frame echoes the main internal story, which is Russell’s struggle to found mathematics on logic. Much of the novel is a tour through the history of logic and its important paradoxes. This tour runs in parallel with a biography of Russell. At all levels the novel seems to argue that you have to balance passion with reason. Russell tried to do it in his life, logicians discovered through the paradoxes that there was no purely logical foundation, and the graphic novel uses comic art to illustrate the story of logic (hence “logicomix”). There is a dog called “Manga” (which apparently in Greek means “cool dude”) who chases the owl (of reason).

Does information want to be free?

I’ve been thinking about the phrase “information wants to be free,” attributed to Stewart Brand by Chris Anderson in Free: The Future of a Radical Price (see Chapter 6). Brand originally saw this as a paradox between information wanting to be expensive and wanting to be free:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other. (Brand, 1984)

Anderson in Chapter 6 of Free goes back to Brand to find out why he anthropomorphized information instead of saying “information should be free.”  (Brand felt it sounded better and that it focused attention on information, not people.)

While the phrase is memorable as it is (and because it ascribes intention to information), I suspect it would be more accurate to say that “information infrastructure is designed to promote free access.” The web was not designed to facilitate payment for information (as Ted Nelson imagined his Xanadu docuverse would be). The design and economics of our infrastructure brought the cost of publishing and dissemination down to the cost of having an internet connection and an account on a server. That made it easy for all sorts of people who have non-commercial reasons for sharing information to publish free information. It did not, however, mean that all information is available free. There are still people who resist sharing information for all sorts of reasons. In particular I am interested in indigenous communities that resist sharing their stories because that would turn them into information. Their stories are meant to be told in a context, by someone who has rights to the story, to others who are ready for it. Posting a story on the net decontextualizes it and reduces it to mere information, which in its freedom is neither really free nor as informative as the original telling.

For a useful web page on the origin and uses of the aphorism, see Roger Clarke’s ‘Information Wants to be Free’.

Internet Archive: Movies from the History of Computing

Willard McCarty on Humanist (Vol. 23, No. 116) pointed to some early films about computing that are worth looking at. One is “The Information Machine” from IBM in 1956. It is an animated cartoon that presents the computer within a history of human information invention. It presents three functions for computing:

  1. Control or Balance (controlling complex systems)
  2. Design (helping us design and think)
  3. Simulation (modelling and predicting)

Another film is On Guard! The Story of SAGE, also from IBM. This is about IBM’s contributions to air defense, specifically the SAGE system and the development of airborne modular computing. There is a fun part about the interactive operator terminal that visualizes data (as opposed to a TV that shows video). The narrator actually talks about visualization (though not interactivity).

RFCs: How the Internet Got Its Rules

Stephen D. Crocker has written an Op-Ed, How the Internet Got Its Rules (April 6, 2009), about the Requests for Comments, or R.F.C.’s, of the Internet. He looks back on writing the first R.F.C. 40 years ago as a student assigned to write up notes from a meeting. He chose to call it an R.F.C. because:

What was supposed to be a simple chore turned out to be a nerve-racking project. Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning.

Calling them R.F.C.’s set the tone for the consensual culture.

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.

Another feature was layering for independence, which allowed people to build new technologies on older ones without asking permission.

Thanks to Dan Cohen on Twitter for this.

What Is Infrastructure?

I’ve written another essay. It seems to be what I do on Sundays. This time I’m trying to work out What Is Infrastructure and how infrastructure differs from supplies. The question is a way into trying to understand the role of big projects like TAPoR or Bamboo, both of which I am involved in (at very different levels). As I thought about it I came to a couple of conclusions:

  • Defining things as infrastructure or cyberinfrastructure is a political move that tries to change how we frame services so we can propose different (and ongoing) ways of funding them. To be more blunt, defining a service as infrastructure moves it from something you ask for a limited grant for to something you ask for ongoing funding for (or something you set up a consortium to provide ongoing funding for).
  • I can imagine a lighter way of weaving infrastructure out of existing industry-provided services, and we should take that approach seriously.
  • Humanities research infrastructure should be public, as in available to everyone and available internationally. Not only can the public participate in humanities research, but opening it up to the public is a way of engaging them. Perhaps the relevance of the humanities lies not in their products, but in their participatory processes. Philosophy is not a science best done in a lab that will eventually produce a cure for ignorance. Philosophy is a love of wisdom we should share because we never owned it and we were never appointed its keepers.

Why not crowdsource the humanities? What would it take to make the (arts and) humanities the public disciplines? What sorts of infrastructure would engage the broader public?

Zielinski: Deep Time of the Media

Siegfried Zielinski’s Deep Time of the Media (translated by Gloria Custance, Cambridge, MA, MIT Press, c2006) is an unusual book that pokes into the lost histories of media technologies in order to start “toward an archaeology of hearing and seeing by technical means” (as the subtitle goes). Zielinski starts by talking about the usual linear history of media technologies, which recovers whatever anticipates what we now believe is important. This is the Vannevar Bush, Ted Nelson type of history. Zielinski looks away from the well-known precursors towards the magical and tries to recover those moments of diversity in technologies. (He writes about Gould’s idea of punctuated equilibrium as a model for media technologies – i.e. that we have bursts of diversity and then periods of conformity.)

I’m interested in his idea of the magical, because I think it is important to the culture of computing. The magical for Zielinski is not a primitive precursor of science or efficiency. The magical is an attitude towards possibility that finds spectacle in technology. Zielinski has a series of conclusions that sort of sketch out how to preserve the magical:

Developed media worlds need artistic, scientific, technical, and magical challenges.  (p. 255)

Cultivating dramaturgies of difference is an effective remedy against the increasing ergonomization of the technical media worlds that is taking place under the banner of ostensible linear progress. (p. 259)

Establishing effective connections with the peripheries, without attempting to integrate these into the centers, can help to maintain the worlds of the media in a state that is open and transformable. (p. 261)

The most important precondition for guaranteeing the continued existence of relatively power-free spaces in media worlds is to refrain from all claims to occupying the center. (p. 269)

The problem with imagining media worlds that intervene, of analyzing and developing them creatively, is not so much finding an appropriate framework but rather allowing them to develop with and within time. (p. 270)

Kairos poetry in media worlds is potentially an efficacious tool against expropriation of the moment. (p. 272)

Artistic praxis in media worlds is a matter of extravagant expenditure. Its privileged locations are not palaces but open laboratories. (p. 276)

Avatars consume as much electricity as Brazilians

Nick Carr, in his blog Rough Type, has a post, Avatars consume as much electricity as Brazilians, where he sets out to calculate the amount of electricity consumed by a Second Life avatar, which turns out to be about what the average Brazilian consumes. The comments are fascinating as people debate his math and Second Life folk correct the calculations about servers (vs. CPUs) used. The point still stands that the average internet user is consuming a lot of electricity – not only does her PC consume it, but so do the servers she connects to (Second Life, Google …). Is this ecologically sustainable? Is the use of energy when computing hidden from us because we don’t have exhaust coming out of our PCs (the greenhouse gases come out of the coal-fired electricity plants far from us)?
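To make the arithmetic concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation involved. The numbers are illustrative placeholders, not Carr’s actual figures (his post and the comment thread argue over what the right values are):

```python
# Back-of-the-envelope estimate of electricity per Second Life avatar.
# All numbers below are illustrative placeholders, not Carr's figures.

servers = 4000               # hypothetical: servers running the virtual world
watts_per_server = 250       # hypothetical: average draw per server
datacenter_overhead = 1.5    # hypothetical: multiplier for cooling and power distribution
concurrent_avatars = 12500   # hypothetical: avatars online at any given moment
client_pc_watts = 120        # hypothetical: share of a user's PC attributable to the world

hours_per_year = 24 * 365

# Server-side electricity, spread across the avatars online at any moment
server_kwh_per_avatar = (servers * watts_per_server * datacenter_overhead
                         * hours_per_year / 1000) / concurrent_avatars

# Client-side electricity for the user's own machine
client_kwh_per_avatar = client_pc_watts * hours_per_year / 1000

total_kwh = server_kwh_per_avatar + client_kwh_per_avatar
print(f"Roughly {total_kwh:.0f} kWh per avatar per year")
```

Whatever inputs one picks, the shape of the argument is the same: the electricity hidden in the data centre counts as part of each user’s footprint, not just what their own PC draws.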

The Mind Tool: Edward Vanhoutte’s Blog

Edward Vanhoutte, who has done some of the best work on the history of humanities computing (though much is not yet published), has started a blog. In his first entry, The Mind Tool: Edward Vanhoutte’s Blog, he summarizes early textbooks that were used to teach humanities computing. It would be interesting to look at how these 70s and 80s books conceive of the computer and how they differ from the 50s and 60s work like that of Booth.