Joan Lippincott: Digital Learning Spaces

Henry Jenkins, the Director of the Comparative Media Studies Program at the Massachusetts Institute of Technology, has written a white paper for the MacArthur Foundation, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, which discusses the challenges of dealing with students who are (or want to be) participating in creating culture. (See my previous entry for a link to a presentation by Jenkins.) Joan Lippincott of the Coalition for Networked Information gave a talk today about how we should think about the library, learning, and space for these NetGen students who are used to the participatory culture of the web. To summarize her discussion of the differences between us and the Net Generation, based partly on Jenkins:

  • We tend to do things in serial (first this task and then that) while the NetGen multitask.
  • We (especially in the Humanities) value privacy and solitary work while the NetGen like to work in teams.
  • We tend to value linear text while they value hyperlinked visual multimedia.
  • We value critical thinking while they value creative production.

Joan goes on to argue that to reach the Net Generation, libraries need to rethink their services and spaces. She showed images of new spaces and discussed some of what she has written about in Linking the Information Commons to Learning, which is part of a book from EDUCAUSE, Learning Spaces. Two things struck me:

  • Lack of Books. In most of the pictures shown of information commons there were no books! This certainly isn’t true when you look at the workstations of most students or faculty in their own spaces, where books, papers, and computers are “mashed” together. Why then are information commons being set up apart from the books and periodicals? One wonders why libraries are building spaces that look more like what computing services should set up. Is it politics – libraries are doing what campus computing services failed to do? Joan, rightly I think, answered that these spaces are (or should be) set up in collaboration with people with technical skill (from computing) and that the idea is to connect students to content, whether digital or print. Books should be there too, or at least be at hand.
  • Lack of Faculty Coordination. While these spaces are popular with students (see Henning’s Final Report on a survey of learning commons), the deeper problem is integration into the curriculum. Individual faculty may take advantage of the changing services and spaces of the library, but I haven’t seen the deep coordination that sees courses across the curriculum changed. Faculty assume the library is a service unit that supports their teaching by having books on reserve. We don’t think of the library as a living space where students are talking through our assignments, collaborating, and getting help with their essays. We don’t coordinate changes in how we teach with changes in space and service; we stumble upon new services and weave them into our courses if we have the time (and it does take time to change how you teach).

So here are a couple of ideas:

  • Curated Distributions. We should think along the lines suggested in A world in three aisles, Gideon Lewis-Kraus’ fascinating discussion of the Prelingers’ personal curated library, where materials are arranged in associative clusters based on a curatorial practice designed to encourage pursuing topics that cross traditional shelf distribution. Why not invite faculty to curate small collections of books to be distributed among the workstations of a commons, where users can serendipitously come across them, wonder why they are there, and browse not just sites but thematic collections of books?
  • Discovery Centres. Another approach would be to work with chairs and deans to identify key courses or sets of courses and then build spaces with faculty input that are designed for studying for those courses. The spaces would have a mix of meeting spaces optimized for tutorials in the course(s), groupwork spaces for the types of groups formed in the courses, print materials (like books and magazines) needed for the course, and electronic finding aids for online materials related to the course. These topical spaces would be centres for students in these courses to access relevant information, browse related materials, meet other students, and get help. A library could obviously only afford a limited number of these, which is why the idea would be to target stressful first and second year courses where chairs identify the need and opportunity for discovery centres.

Desk Set (1957)

Desk Set (1957) is a Katharine Hepburn and Spencer Tracy movie about automation in which Tracy, an engineer, is brought in to automate the research department run by Bunny Watson (Hepburn). There is a moment of interest to digital humanists when Tracy is showing off EMERAC:

Boss: Well there she is, EMERAC, the modern miracle …

Richard Sumner (Tracy): The purpose of this machine, of course, is to free the worker…

Bunny Watson (Hepburn): You can say that again…

Sumner: …to free the worker from the routine and repetitive tasks and liberate his time for more important work.

For example, you see all those books there … and the ones up there? Well, every fact in them has been fed into Emmy. What do you have there?

Operator: This is Hamlet.

Boss: That’s Hamlet?

Operator: Yes, the entire text.

Sumner: In code, of course… Now these little cards create electronic impulses which are accepted and retained by the machine so that in the future, if anyone calls up and wants a quotation from Hamlet, the research worker types it into the machine here, Emmy goes to work, and the answer comes out here.

Boss: And it never makes a mistake.

Sumner: Well … Now that’s not entirely accurate. Emmy can make a mistake.

Bunny: Ha ha…

Sumner: But only if the human element makes the mistake first.

Boss: Tell me Bunny, has EMERAC been helping you any?

Bunny: Well frankly it hasn’t started to give yet. For the past two weeks we’ve been feeding it information. But I think you could safely say that it will provide more leisure for more people.

There is an image of EMERAC on Flickr.

bastwood.com: Aphex Face and transcoding

bastwood.com has a good page on images found in sound, starting with the demon face in Aphex Twin’s “Windowlicker”. It turns out the demon is really an inverted version of the Twin himself. The site goes on to discuss how to find such images and how to create sound from images using common software.
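If you want to play with the idea yourself, here is a rough sketch of one way to hide an image in audio using Python rather than the audio software the site discusses: each image row drives a sinusoid whose amplitude follows the pixel brightness along that row, so the picture reappears when you look at the sound in a spectrogram. The input filename, frequency range, and libraries (NumPy, SciPy, Pillow) are my own choices for illustration, not what bastwood.com used.

    import numpy as np
    from PIL import Image
    from scipy.io import wavfile

    # Load the image as grayscale; flip it so the top row maps to the highest frequency.
    img = np.asarray(Image.open("face.png").convert("L"), dtype=float) / 255.0  # placeholder file
    img = np.flipud(img)
    n_rows, n_cols = img.shape

    sample_rate = 44100
    duration = 5.0  # seconds of audio to generate
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

    # One sinusoid per image row; its amplitude over time follows that row's brightness.
    freqs = np.linspace(200, 8000, n_rows)  # arbitrary frequency band
    signal = np.zeros_like(t)
    for row, freq in zip(img, freqs):
        envelope = np.interp(t, np.linspace(0, duration, n_cols), row)
        signal += envelope * np.sin(2 * np.pi * freq * t)

    # Normalize and write a 16-bit WAV; open it in any spectrogram viewer to see the image.
    signal /= np.max(np.abs(signal))
    wavfile.write("face.wav", sample_rate, (signal * 32767).astype(np.int16))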

There are some uses for image/video sonification tools other than putting surprises in tunes. See The vOICe synthetic vision software site, which sells systems for the visually impaired.

Thanks to Alex for the link.

TAPoRware Word Cloud

We’ve been playing with ways to make text analysis tools that don’t need parameters, like word clouds, run automatically when a page loads. See the TAPoRware Word Cloud documentation. Here is an example.
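For the curious, the idea behind a parameter-free word cloud is simple: count word frequencies and scale each word’s display size by its count. Here is a rough sketch of that idea in Python; it is not the TAPoRware code itself (which runs as a hosted web service), and the stoplist, font sizes, and input file are placeholders.

    import re
    from collections import Counter

    STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "it", "for"}

    def word_cloud_html(text, max_words=50, min_px=10, max_px=48):
        """Return an HTML fragment where each word is sized by its frequency."""
        words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
        counts = Counter(words).most_common(max_words)
        if not counts:
            return ""
        top = counts[0][1]
        spans = []
        for word, count in sorted(counts):  # display in alphabetical order
            size = min_px + (max_px - min_px) * count / top
            spans.append(f'<span style="font-size:{size:.0f}px">{word}</span>')
        return " ".join(spans)

    with open("page.txt") as f:  # placeholder input file
        print(word_cloud_html(f.read()))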


An alternate beginning to humanities computing

Reading Andrew Booth’s Mechanical Resolution of Linguistic Problems (1958), I came across some interesting passages about the beginnings of text computing that suggest an alternative to the canonical Roberto Busa origin story. Booth (the primary author) starts the book with a “Historical Introduction” in which he alludes to Busa’s project as part of a list of linguistic problems that run parallel to the problems of machine translation:

In parallel with these (machine translation) problems are various others, sometimes of a higher, sometimes of a lower degree of sophistry. There is, for example, the problem of the analysis of the frequency of occurrence of words in a given text. … Another problem of the same generic type is that of constructing concordances for given texts, that is, lists, usually in alphabetic order, of the words in these texts, each word being accompanied by a set of page and line references to the place of its occurrence. … The interest at Birkbeck College in this field was chiefly engendered by some earlier research work on the Dialogues of Plato … Parallel work in this field has been carried out by the I.B.M. Corporation, and it appears that some of this work is now being put to practical use in the preparation of a concordance for the works of Thomas Aquinas.
A more involved application of the same sort is to the stylistic analysis of a work by purely mechanical means. (p. 5-6)
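The tasks Booth lists are easy to picture in modern code. Here is a minimal sketch in Python of a word-frequency count and a concordance of the kind he describes; the tokenization and the use of line numbers as “references” are my own simplification for illustration, not anything taken from Booth’s book.

    import re
    from collections import defaultdict

    def concordance(text):
        """List every word, in alphabetic order, with the line numbers where it occurs."""
        refs = defaultdict(list)
        for lineno, line in enumerate(text.splitlines(), start=1):
            for word in re.findall(r"[a-z']+", line.lower()):
                refs[word].append(lineno)
        return dict(sorted(refs.items()))

    def frequencies(text):
        """Count how often each word occurs (the frequency problem Booth mentions first)."""
        return {word: len(lines) for word, lines in concordance(text).items()}

    sample = "To be, or not to be, that is the question:\nWhether 'tis nobler in the mind to suffer"
    for word, lines in concordance(sample).items():
        print(word, lines)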

In Mechanical Resolution he continues with a discussion of how to use computers to count words and to generate concordances. He has a chapter on the problem of Plato’s dialogues, which seems to have been a set problem at that time, and, of course, there are chapters on dictionaries and machine translation. He also describes some experiments he ran starting in the late 1940s which suggest that Searle’s Chinese Room Argument of 1980 was anticipated by real human simulations.

Although no machine was available at this time (1948), the ideas of Booth and Richens were extensively tested by the construction of limited dictionaries of the type envisaged. These were used by a human untutored in the languages concerned, who applied only those rules which could eventually be performed by a machine. The results of these early ‘translations’ were extremely odd, … (p. 2)

Did others run such simulations of computing with “untutored” humans in the early years when they didn’t have access to real systems? See also the PDF of Richens and Booth, Some Methods of Mechanized Translation.

As for Andrew D. Booth, he ended up in Canada working on French/English translation for the Hansard, the bilingual transcript of parliamentary debates. (Note that Bill Winder has also been working on these, but using them as source texts for bilingual collocations.) Andrew and Kathleen Booth wrote a contribution on The Origins of MT (PDF) that describes his early encounters with pioneers of computing around the possibilities of machine translation starting in 1946.

We date realistic possibilities starting with two meetings held in 1946. The first was between Warren Weaver, Director of the Natural Sciences Division of the Rockefeller Foundation, and Norbert Wiener. The second was between Weaver and A.D. Booth in that same year. The Weaver-Wiener discussion centered on the extensive code-breaking activities carried out during World War II. The argument ran as follows: decryption is simply the conversion of one set of “words”–the code–into a second set, the message. The discussion between Weaver and A.D. Booth on June 20, 1946, in New York identified the fact that the code-breaking process in no way resembled language translation because it was known a priori that the decrypting process must result in a unique output. (p. 25)

Booth seems to have successfully raised funds from the Nuffield Foundation for a computer at Birkbeck College at the University of London that was used by L. Brandwood, among others, for work on Plato. In 1962 he and his wife moved to Saskatchewan to work on bilingual translation and then, in 1972, to Lakehead in Ontario, where they “continued with emphasis on the construction of a large dictionary and the use of statistical techniques in linguistic analysis”. They retired to British Columbia in 1978, as most sensible Canadians do.

In short, Andrew Booth seems to have been involved in the design of early computers in order to get systems that could do machine translation, and that led him to support a variety of text processing projects, including stylistic analysis and concording. His work has been picked up as important to the history of machine translation, but not to the history of humanities computing. Why is that?
In a 1960 paper on The future of automatic digital computers, he concludes:

My feeling on all questions of input-output is, however, the less the better. The ideal use of a machine is not to produce masses of paper with which to encourage Parkinsonian administrators and to stifle human inventiveness, but to make all decisions on the basis of its own internal operations. Thus computers of the future will communicate directly with each other and human beings will only be called on to make those judgements in which aesthetic considerations are involved. (p. 360)

Epstein: Dialectics of “Hyper”

Mikhail Epstein’s Hyper in 20th Century Culture: The Dialectics of Transition From Modernism to Postmodernism (Postmodern Culture 6:2, 1996) explores “the intricate relationship of Modernism and Postmodernism as the two complementary aspects of one cultural paradigm which can be designated by the notion ‘hyper’ and which in the subsequent analysis will fall into the two connected categories, those of ‘super’ and ‘pseudo.’” (para 7) Epstein plays with “hyper” as a prefix naming an excess that goes beyond a limit and then reflects back on itself. Modernist revolutions overturn inherited forms in a search for the “super”, and in their excess of zeal they pass a limit and become simulations of themselves, the “pseudo”. The hyper encloses both the modernist search for the super truth and the postmodernist reaction to the simulations of modernity. The postmodern play on excess depends on the modernist move for its material, to the point where it serves to heighten (another meaning of hyper) the super-modern. Super and pseudo thus become intertwined in the ironic hyper.

In the final analysis, every “super” phenomenon sooner or later reveals its own reverse side, its “pseudo.” Such is the peculiarly postmodernist dialectics of “hyper,” distinct from both Hegelian dialectics of comprehensive synthesis and Leftist dialectics of pure negation. It is the ironic dialectics of intensification-simulation, of “super” turned into “pseudo.” (para 60)

Epstein looks at different spheres where this hyper-unfolding takes place, using the word “hyper-textuality” in a different sense than how it is usually used for electronic literature. For Epstein hypertextuality describes a parallel process that happened in Russia and in the West where first modernist literary movements (Russian Formalism and Anglo-American New Criticism) stripped away the historical, authorial, and biographical to understand the pure “literariness” of literature. The purification of literature left only the text as something “wholly dependent on and even engendered by criticism.” (para 21) “Postmodernism emerged no sooner than the reality of text itself was understood as an illusionary projection of a critic’s semiotic power or, more pluralistically, any reader’s interpretative power (‘dissemination of meanings’).” (para 25)

Epstein quotes Baudrillard on the net of mass communication replacing reality with a hyperreality, but doesn’t explore how the hyper in his sense is connected to the excess of networked information. It is in another essay, “The Paradox of Acceleration,” that we see a clue:

Each singular fact becomes history the moment it appears, preserved in audio, visual, and textual images. It is recorded on tape, photographed, stored in the memory of a computer. It would be more accurate to say that each fact is generated in the form of history.

Ultimately, inscription of the fact precedes the occurrence of the fact, prescribing the forms in which it will be recorded, represented, and reflected. (p. 179)

The ironic tension of the modern and postmodern is magnified by the hyper-excess of automated inscription. The excess of information is deadening us to the human in history as an unfolding. We are in a baroque phase where the only thing valued is the hyper-excess itself: excess of archiving, excess of theory, excess of reference, excess of quotation, excess of material, excess of publication, excess of criticism, excess of attention … but no time.

What next? Will we see the burning of books or a “simple thinking” movement? How do people react to an oppressive excess?

The essay in PMC is excerpted from “The Dialectics of Hyper: From Modernism to Postmodernism,” in Russian Postmodernism: New Perspectives on Post-Soviet Culture. Ed. M. Epstein, A. Genis, and S. Vladiv-Glover. New York: Berghahn Books, 1999. pp. 3-30.

The essay on acceleration, “The Paradox of Acceleration,” is also in Russian Postmodernism, pp. 177-181.

Long Bets Now

Have you ever wanted to go on record with a prediction? Would you like to put money (that goes to charity) on your prediction? The Long Bets Foundation lets you do just that. It is a (partial) spin-off of The Long Now Foundation where you can register and make long-term predictions (up to thousands of years, I believe). The money bet and challenged goes to charity; all you get if you are right is credit and the choice of charity. An example prediction in the text analysis arena is:

Gregory W. Webster predicts: “That by 2020 a wearable device will be available that will use voice recognition capability and high-volume storage to monitor and index conversations you have or conversations which occur in your vicinity for later searching as supplemental memory.” (Prediction 16)

Some of the other predictions of interest to humanists are: 177 about print on demand, 179 about reading on digital devices, and 295 about a second renaissance.

The Long Bets site has some interesting people making predictions and bets (a prediction becomes a bet when formally challenged), including Ray Kurzweil betting against Mitch Kapor that “By 2029 no computer – or “machine intelligence” – will have passed the Turing Test.” (Bet 1)

Just to make life interesting, there is prediction 137, that “The Long Bets Foundation will no longer exist in 2104.” 63% of the voters seem to agree!

International Network of Digital Humanities Centres

There is a call circulating to set up an International Network of Digital Humanities Centres, which looks like a good thing. It is in part a response to the Cyberinfrastructure report. The initiatives they imagine such a network being involved in are:

  • workshops and training opportunities for faculty, staff, and students
  • developing collaborative teams that are, in effect, pre-positioned to apply for predictable multi-investigator, multi-disciplinary, multi-national funding opportunities, beginning with an upcoming RFP that invites applications for supercomputing in the humanities
  • exchanging information about tools development, best practices, organizational strategies, standards efforts, and new digital collections, through a digital humanities portal

Towards a pattern language for text analysis and visualization

One outcome of the iMatter meeting in Montreal is that I have started a white paper on TADA that tries to think towards a Pattern Language for Text Analysis and Visualization. This white paper is not the language itself or a catalogue of patterns, but an attempt to orient myself towards what such a pattern language would be and what the dangers of such a move would be.