A Collaborative Research Commons

Computing With The Infrastructure At Hand is an essay I wrote last weekend and have been editing; it tries to think about how to do humanities computing if you don’t have grants and don’t have lots of support. I ended up trying to imagine a Collaborative Research Commons that would crowdsource digital humanities work.

While research as a gift economy may seem idealistic, I’ve been surprised by the extraordinary collaboration you get when you set up a structured way for people to contribute to a project. The Suda On Line project first showed (me at least) the potential for social and volunteer research. I’ve had luck with the Dictionary of Words in the Wild and the upcoming Day of Digital Humanities. This last project has yet to happen, but we have close to 100 participants signed up. My point is that we can imagine ways of doing research that don’t start with how to get a grant before we can talk.

Hall: Digitize This Book!

Digitize This Book! by Gary Hall is an interesting book at the intersection of cultural studies and humanities computing. The book seems to be addressed mostly to the cultural studies crowd, asking: “do cultural studies writers, thinkers, and practitioners not also need to experiment with ways of being ‘militant’ in a positive, innovative, creative, and constructive fashion in their own situations, institutions, and places of work?” (p. 206) The book is a sustained defense of the Cultural Studies e-Archive (CSeARCH) and other computing projects that Hall has initiated. He is trying to make space in cultural studies for projects we would recognize as humanities computing projects. To do this he argues against a “transcendental politics” which assumes a commitment to a particular political analysis, in order to open room for actions, like starting an open archive, that cannot be demonstrated a priori to be in support of capitalism or not. He ends the book with,

A fixed, pure and incorruptible institution could only be a violent, transcendental, totalizing, and totalitarian fantasy. One could even argue, after Derrida, that it is precisely the structurally open and undecidable nature of the situation – the fact that an institution or archive can be used to facilitate the forces of capitalism and globalization – that gives it ethical and political force. (p. 214)

Now I tend to shudder when I read phrases like “the forces of capitalism”, partly because I don’t understand the tradition of thought that takes such things as givens, but I don’t, as many colleagues do, believe we should therefore shun cultural studies or other forms of post-modern thought. Hall is interested in something important, and that is the ethics and politics of digital work. To avoid discussing the ethics and politics of what we do in the university or as developers of digital works is to subscribe to a naive and unexamined ethic. Many avoid politics because the discourse has been politicized by second-rate cultural studies folk who think shaming others for not being militant is a form of engagement. Hall is trying to open room for a form of politics beyond politics (or hyperpolitics) where we can act without knowing for sure what the consequences of our actions will be. That is the heart of ethics for me: acting (or not acting, which in turn is a form of acting) in the face of insufficient knowledge or ability. We always do things without being sure; ethics is knowing that and trying to deal thoughtfully with the ignorance.

Part of what I am saying here, then, is that certain forms, practices, and performances of new media – including many of those associated with open-access publishing and archiving – make us aware that we can no longer assume that we unproblematically know what the “political” is, or what sorts of interventions count as political. (p. 196)

Hall in his actions (like CSeARCH and the Open Humanities Press) and in his writing is trying to reach out to those in open access circles and in computing circles. We who are too buried in the techne should reach back.


You can find earlier versions of sections on CSeARCH like The Cultural Studies E-Archive Project (Original Pirate Copy), but, ironically, I can’t find a copy of Digitize This Book!. No one has bothered to digitize it, no doubt due to the copyright notice at the beginning (p. iv) that states,

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. (p. iv)

Is there a contradiction between the injunction of the title (“Digitize This Book!”) and the copyright notice? What is the status of a title when it comes to rights? Should I digitize the book?

To be fair to Hall, the chapters of his previous book, Culture In Bits, are available on CSeARCH and I assume he will make Digitize This Book! also available after a suitable interval. Perhaps someone knows him and can update me or point me to a digitized version already open.

Note: since writing this someone passed on a note to Gary Hall who kindly pointed me to online copies of other chapters. See my more recent blog entry with the links.


Hall makes an interesting move at the beginning of the book to position open access as a middle way for the university, between the commercialization of the university and the (impossibly elitist) return to whatever it is we think we were doing in the humanities in the good old days. I find it interesting that Hall believes “cultural studies has for some time now arguably been the means by which the university thinks about itself …” (p. 13). I’ve seen no evidence of this – cultural studies to me seems to want to position itself as outside the university, critiquing it in the Socratic gadfly tradition, rather than taking a role acknowledged by the university. It would probably come as a surprise to most university administrators that cultural studies is doing this for them and somehow represents the university’s institutionalized reflection. And therein lies the promise of Hall’s book – that there is a type of creative activity we can all engage in, through which we can imagine the university by modeling it. We don’t need approval to set up open works. We can use the technology to create ways for the university to think about itself.

The Apostrophe: When they’re gone, theyre gone

Image of Street Sign without Apostrophe

Today’s Edmonton Journal reprinted a great article on the apostrophe following the widely reported decision of the Birmingham city council to stop using apostrophes in signs to save money (and not have to deal with pedants). The article, titled When theyre gone, well all be struggling with English, traces some of the issues around the use of this “tadpole-shaped bundle of trouble.” It is worth pointing out that apostrophes are important to humanities computing. They are used, among other things, to mark absence. As the etymology of apostrophe suggests, they point away from the text, like a link, to something missing or passed over (in the sense that you don’t pronounce the missing syllable.) I’m less interested in the controversy over punctuation reform than in what we can learn from punctuation about markup and the digital representation of text.

  • Punctuation is both part of the text and about the text. An apostrophe likewise is both part of the string of characters that we could call the text and yet points to something missing that the reader can fill in. This being both of the text and about it is what is difficult about the textual ontology of markup. See Markup: Buzzetti and Renear.
  • There is, as with most punctuation, a rhetorical dimension to the apostrophe. The word also refers to an exclamatory figure of speech in which a speaker turns away and addresses some other, imaginary audience, often introduced by “O”. For example, when Juliet turns away and addresses Romeo in Romeo and Juliet, Act II, Scene 2: “O Romeo, Romeo! wherefore art thou Romeo?” Likewise markup is apostrophic in that it typically addresses the machine rather than the reader. The tags of the anchor element in HTML, for example, are hidden from the reader and provide instructions for the machine should the reader click on the anchored text.
  • The single quotation mark on typewriters and early computers (7-bit ASCII) is one of the most overloaded characters. It is used for all sorts of different things, from the single quotation mark, to the apostrophe, to the acute accent. Likewise markup overloads certain characters with special meaning through the general strategy of the escape character. The left angle bracket < gets loaded with a new function in XML markup languages: it serves not as a visible character, but as the mark that sets certain text aside to be interpreted differently as code. The text between angle brackets is turned aside to be interpreted by the machine. That we have escape characters, or punctuation like the apostrophe, that can turn the interpretation is somehow important to computing and digital representation (see the sketch after this list). That most keyboards have an Escape key indicates how important the possibility of escape is. It is the key to the possibility of interrupting the machine.
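
To make the escape idea concrete, here is a minimal sketch in Python (my own illustration, not from the post) of how XML processing treats the left angle bracket as markup, so that a literal < in the text has to be turned into an entity, while the apostrophe passes through as an ordinary character:

    from xml.sax.saxutils import escape, quoteattr

    text = "O Romeo, Romeo! wherefore art thou Romeo? (1 < 2)"

    # escape() converts the characters an XML parser would read as markup
    # (&, <, >) into entities; the apostrophe is left alone.
    print(escape(text))
    # O Romeo, Romeo! wherefore art thou Romeo? (1 &lt; 2)

    # quoteattr() goes further and wraps the string for use as an attribute
    # value, choosing and escaping quotation marks as needed.
    print(quoteattr("the 'escaped' text"))
    # "the 'escaped' text"

The point is only that one character gets set aside to signal that what follows should be interpreted differently, which is the same move the apostrophe makes when it points to a missing letter.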

The apostrophe could be the punctuation mark of an escape that always was. It turns us away from using text to appreciate what is gone. They’re gone because they are always escaped. Theyre gone because they were never here.

Singularity University: Exponential Silliness 2.0?

Ray Kurzweil, who has been predicting “spiritual machines” (AI) for a while now, has been appointed Chancellor of the Singularity University. The Singularity University is based at NASA Ames and supported by Google (and Moses Znaimer, another visionary wannabe.) Its mission is to focus on exponential advances leading to singularities where you get a paradigm shift. The Overview describes the aims of the University thus:

Singularity University aims to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.

The University thus seems dedicated to a particular, and questionable, view of technological development, one which looks to a future of dramatic paradigm shifts triggered by these singularities. For example, the goal of the Academic Track “Future Studies & Forecasting” is “cultivating the student’s ‘exponential intuition’ — the ability to fully grasp the magnitude of possible outcomes likely to arise in specific domains.” No room here for modesty or skepticism.

The University is not really a University. It is more of an institute funded by commercial partners and providing intensive programs to graduate students and, importantly, executives. I’m surprised NASA is supporting it and legitimating something that seems a mix of science and science fiction – maybe they have too much room at their Ames campus and need some paying tenants. Perhaps in California such future speculation doesn’t seem so silly. I guess we will have to wait until about 2045 when the intelligence singularity is supposed to happen and see.

But what is the Singularity? The Wikipedia article on Technological Singularity quotes I. J. Good as describing the “intelligence explosion” that would constitute the singularity thus:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The key for an intelligence singularity (as opposed to other types) is the recursive effect of the feedback loop when a machine is smart enough to improve itself. That is when we go from change (whether accelerating exponentially or not) to the independent evolution of intelligent machines. That is when they won’t need us to get better and we could become redundant. Such dramatic shifts are what the Singularity University prepares paying executives for and trains graduate students to accelerate.
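
As a back-of-the-envelope illustration (mine, not Good’s or Kurzweil’s), the argument can be caricatured in a few lines of Python: once a system’s rate of improvement depends on its current ability, growth stops being merely exponential and runs away:

    # Toy model (an assumption for illustration only): each generation of
    # machine multiplies its "design ability" by a factor that itself grows
    # with that ability -- a crude stand-in for machines designing better machines.
    def generations(ability: float, rate: float, n: int) -> list[float]:
        history = [ability]
        for _ in range(n):
            ability = ability * (1 + rate * ability)
            history.append(ability)
        return history

    print([round(a, 2) for a in generations(1.0, 0.1, 10)])
    # the later terms grow faster and faster -- the "explosion" in Good's sense

Whether any real system behaves like this toy loop is, of course, exactly what is in question.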

It is easy to make fun of these ideas, but we need to be careful that we don’t end up confidently predicting that they can’t happen. Kurzweil is no fool and he bases his prediction on extrapolations of Moore’s law. Futurology will always be risky, but everyone has to do it to some degree. For that matter there do seem to be moments of accelerating technological change leading to dramatic paradigm shifts, so we shouldn’t be so sure Kurzweil is wrong about the next one. I should add that I like the proposed interdisciplinarity of the Singularity University – the idea is that dramatic change or new knowledge can come from ideas that cross disciplines. This second organizing principle of the University has legs in this time of new and shifting disciplines. We need experiments like this. I just wish the Singularity University had had the courage to include academic tracks with the potential for critical engagement with the idea of an intelligence singularity. Why not a “History and Philosophy of Futurology” track that could call into question the very premise the University is named for? After all, a real university should be built on an openness of mind we would call intelligence, not dogmatic certainty in a prediction.

Rebooting Computing Manifesto

On the subject of manifestos, one of my students pointed me to a project Peter Denning is leading that has a Rebooting Computing Manifesto. The project is sponsored by the National Science Foundation (of the USA) and is aimed at trying to reinvigorate computer science in the face of dramatic drops in enrollment.

It is a time of challenges for the computing field. We are tired of hearing that a computing professional is little more than a program coder or a system administrator; or that a college or graduate education is unnecessary; or that entering the computing field is a social death. We are dismayed that K-12 students, especially girls, have such a negative perception of computing. We are alarmed by reports that the innovation rate in our field has been declining and that enrollments in our degree programs have dropped 50% since 2001. Instead of the solo voice of the programmer, we would like to hear from the choir of mathematicians, engineers, and scientists who make up the bulk of our field.

I like how this is articulated as a challenge. I also like the can-do approach of gathering and coming up with ideas.

Journal of Virtual Worlds Research

Stan pointed me to the inaugural issue of the Journal of Virtual Worlds Research which has a number of fine articles.

  • “Cityspace, Cyberspace, and the Spatiology of Information” by Michael L. Benedikt is a reprint of a classic paper where he argues that,

    If we wish to reach deeply into the “nature” of “space itself” then, I believe we must allow into it, as it were, a substance of some sort: not the æther of nineteenth-century science perhaps, but a registering, tracing, questioning, remembering substance, spread as thinly as we can imagine, but present nonetheless, and definitive of here versus there because of how it pools, how it vibrates, how it scatters difference, différance. (p. 2)

    That substance is information. As he puts it later, “ultimately, the space in information and the information in space are one.” (p. 15)

  • “Toward a Definition of ‘Virtual Worlds’” by Mark W. Bell is a short “Think Piece” defining “virtual worlds” as “A synchronous, persistent network of people, represented as avatars, facilitated by networked computers.” (p. 2)

These two pieces make an interesting contrast since Benedikt focuses on space and Bell manages to define virtual worlds without any reference to space. Benedikt calls for architects to engage in the design of virtual spaces while Bell focuses on the network of avatars – or the people within the space (and persistent time.)

Ever since the Gartner press release saying that “80 Percent of Active Internet Users Will Have A ‘Second Life’ in the Virtual World by the End of 2011” there has been a renewed interest in virtual worlds. My sense is that the 1990s interest in virtual reality was overblown and ultimately wrong in that people predicted we would be manipulating information inside virtual worlds with VR interfaces, data-gloves, headsets and so on. What has emerged instead is the proliferation of massive multiplayer online environments from games like World of Warcraft to social/creative spaces like Second Life. The headsets and torture apparatus of Lawnmower Man are gone, thank you!

So … what is next? I’ve just finished Halting State by Charles Stross, a near-future detective story set in Edinburgh where players can move their avatars from game to game in the Zone (something actually proposed by Linden Labs and IBM – see Lohr, Free the Avatars – this reference is from the Messinger, Stroulia and Lyons article “A Typology of Virtual Worlds” in the JVWR.) What is more interesting is the way Stross imagines the overlay of virtual and real worlds. Everyone, including cops, wears glasses that provide augmented reality views of the world they walk through, including the ability to see people in their in-game avatar representation while, for example, at a trade fair. Stross does an imaginative job of weaving the virtual into everyday life. (If you like this book you should also read Accelerando – a great accelerating run through artificial life as it leaves meat behind.)

Walter Ong Defining the Humanities for Congress

Man can even reflect upon his own earlier reflections as these are registered in books and elsewhere. All this is what ultimately the humanistic subjects deal with: Mankind’s life world, [page break] everything around and in men and women insofar as it affects or is affected by human consciousness.

The humanities–and I think we should get this clear–are not defined by being set against a field of science and technology presumably hostile to them. This is a fashionable, but essentially cheap, way of treating both fields.

– Walter Ong, “Defining the Humanities for Congress”

Browsing through the Notes from the Walter Ong Collection I came across an extended quote from Ong’s address to Congress from 1978 when he was president of the MLA. The address was in support of a resolution to authorize the President to call a conference on the humanities. Walter Ong quotes a definition of the humanities which he wants to play with,

The joint resolution introduced by Mr. Brademas on October 27, 1977, in the House of Representatives follows Congress description of 1965 in stating that:

“The term “humanities” includes, but is not limited to, the study of the following: language, both modern and classical; linguistics; literature; history, jurisprudence; philosophy; archeology; comparative religion; ethics; the history, criticism, theory, and practice of the arts; those aspects of the social sciences which have humanistic content and employ humanistic methods; and the study and application of the humanities to the human environment with particular attention to the relevance of the humanities to the current conditions of national life.”

He then goes on to conclude,

However, if the humanities need technology, technology also needs the humanities. For technology calls for more than technological thinking, as our present ecological crises remind us. Technology demands reflection on itself in relation to the entire human life world. Such reflection is no longer merely technology, it includes the humanities even though it needs to be done especially by scientists and technologists.

Ong, Walter J. “Statement of Rev. Walter J. Ong, Professor of English and Professor of Humanities in Psychiatry at St. Louis University; and President, Modern Language Association of America.” White House Conference on the Humanities. Joint Hearings before the Subcommittee on Select Education of the Committee on Education and Labor, House of Representatives, and the Subcommittee on Education, Arts and Humanities of the Committee on Human Resources, United States Senate, Ninety-Fifth Congress, First and Second Session, on H.J Res. 639 to Authorize the President to call a White House Conference on the Humanities. Washington: U.S. Government Printing Office, 1978. 684-88.

Cybersyn: Before the Coup, Chile Tried to Find the Right Software for Socialism

Image of Cybersyn Opsroom

In New York for my last f2f meeting of the MLA Committee on Information Technology I got a New York Times with an intriguing article about a Chilean management system, Cybersyn, titled Before the Coup, Chile Tried to Find the Right Software for Socialism.

Cybersyn was born in July 1971 when Fernando Flores, then a 28-year-old government technocrat, sent a letter to Mr. Beer seeking his help in organizing Mr. Allende’s economy by applying cybernetic concepts. Mr. Beer was excited by the prospect of being able to test his ideas.

He wanted to use the telex communications system – a network of teletypewriters – to gather data from factories on variables like daily output, energy use and labor “in real time,” and then use a computer to filter out the important pieces of economic information the government needed to make decisions.

Cybersyn was apparently semi-functional before the coup that overthrew Allende’s government, and it was used to help the government manage around the small-business and truckers’ strike in 1972. I don’t think the Opsroom pictured above was ever fully operational, but visualization screens were important even if at the time they were hand-drawn slides that were projected rather than computer-generated visualizations (see http://varnelis.net/blog/kazys/project_cybersyn on the chairs of the Opsroom.) Beer and the Chileans wanted Cybersyn to help them implement an alternative socialist economy, one managed in real time rather than “free” and chaotic or planned in the heavy-handed way of most socialist economies of the time.
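
The filtering idea is easy to sketch. Here is a toy version in Python (my own illustration; the actual software used proper statistical forecasting and was far more sophisticated): flag only those factories whose daily figures drift well away from their recent average, so that managers see exceptions rather than raw telex traffic.

    # Toy exception filter (an illustration only, not Cybersyn's actual method):
    # report a factory's reading only when it deviates sharply from that
    # factory's recent average output.
    from collections import defaultdict, deque

    def exceptions(readings, window=7, threshold=0.2):
        history = defaultdict(lambda: deque(maxlen=window))
        for factory, value in readings:
            past = history[factory]
            if past:
                mean = sum(past) / len(past)
                if abs(value - mean) > threshold * mean:
                    yield factory, value, mean
            past.append(value)

    telex = [("textiles", 100), ("textiles", 98), ("copper", 50),
             ("copper", 52), ("textiles", 60)]
    print(list(exceptions(telex)))   # [('textiles', 60, 99.0)]

The point of the sketch is only the architectural idea the article describes: collect routine figures centrally and surface only the exceptions that need a decision.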

Rooting around, I found a good article about Cybersyn and the English visionary designer Stafford Beer from 2003 in the Guardian by Andy Beckett, Santiago Dreaming. It turns out that Beer gave the Massey Lectures in 1971 and they have been reprinted by Anansi as Designing Freedom. He also moved part-time to Toronto in the 80s where his last partner, Dr. Allenna Leonard of Metaphorum, still resides. He died in 2002.

Another interesting thread is Fernando Flores, who was the political lead of Cybersyn and the person who recruited Beer for the project. After the coup, Flores went to the US and got a Ph.D. in Computer Science, collaborating with Terry Winograd and being influenced by Maturana, also Chilean. That’s right – the Flores of Understanding Computers and Cognition. He is now back in Chile as a senator and supports various projects there.

The common thread is that Beer, Flores and Maturana all seem interested in viable systems in different spheres. They were applying cybernetics.

Dreyfus: Alchemy and Artificial Intelligence

Willard in Humanist pointed us towards an interesting RAND paper by Hubert L. Dreyfus from 1965, Alchemy and Artificial Intelligence, which suggests that artificial intelligence research is like alchemy – initial success has led to it being oversold when the fundamental paradigm is wrong.

Alchemists were so successful in distilling quicksilver from what seemed to be dirt, that after several hundred years of fruitless effort to convert lead into gold they still refused to believe that on the chemical level one cannot transmute metals. To avoid the fate of the alchemists, it is time we asked where we stand. Now, before we invest more time and money on the information-processing level, we should ask whether the protocols of human subjects suggest that computer language is appropriate for analyzing human behaviour. Is an exhaustive analysis of human intelligent behavior into discrete and determinate operations possible? Is an approximate analysis of human intelligent behavior in such digital terms probable? The answer to both these questions seems to be, “No.”

In this paper Dreyfus leverages the lack of progress after people like H. A. Simon in 1957 predicted the extraordinary. Dreyfus does more than make fun of the hype, he uses it to question what AI research might achieve at all and to think about intelligence.

Now that we are 50 years after Simon’s predictions things are more complicated. We do have chess playing machines that are better players than humans. (Dreyfus points out how the early machines being hyped were really stupid chess players.) We do have machines that can recognize complex patterns and recognize speech. We do have better machine translation. It may be going slowly, but research is moving forward. Perhaps the paradigm of the mind as a machine is wrong, but thinking about it that way and trying to model intelligent behaviour is getting results. What then do we make of the alchemical insult? Is it too easy to dismiss as magical thinking those projects that are ambitious and make the mistake of predicting success? Having recently read Siegfried Zielinski’s Deep Time of the Media, I’m finding myself more sympathetic to magical projects that promise to transmute data into intelligence. Impossible … probably, but that is no reason not to try.

To paraphrase the third of the recently deceased Arthur C. Clarke’s three laws of prediction:

“Any sufficiently magical proposal should be indistinguishable from research.”

This obviously applies to grant proposals.