Joan Lippincott: Digital Learning Spaces

Henry Jenkins, the Director of the Comparative Media Studies Program at the Massachusetts Institute of Technology, has written a whitepaper for the MacArthur Foundation, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, about the challenges of dealing with students who are (or want to be) participating in creating culture. (See my previous entry for a link to a presentation by Jenkins.) Joan Lippincott from the Coalition for Networked Information gave a talk today about how we should think about the library, learning, and space for these NetGen students who are used to the participatory culture of the web. To summarize her discussion, based partly on Jenkins, of the differences between us and the Net Generation:

  • We tend to do things in serial (first this task and then that) while the NetGen multitask.
  • We (especially in the Humanities) value privacy and solitary work while the NetGen like to work in teams.
  • We tend to value linear text while they value hyperlinked visual multimedia.
  • We value critical thinking while they value creative production.

Joan goes on to argue that to reach the Net Generation, libraries need to rethink their services and spaces. She showed images of new spaces and discussed some of what she has written about in Linking the Information Commons to Learning, which is part of a book from EDUCAUSE, Learning Spaces. Two things struck me:

  • Lack of Books. In most of the pictures shown of information commons there were no books! This certainly isn’t true when you look at the workstations of most students or faculty in their own spaces, where books, papers, and computers are “mashed” together. Why then are information commons being set up apart from the books and periodicals? One wonders why libraries are building spaces that look more like what computing services should set up. Is it politics – libraries are doing what campus computing services failed to do? Joan, rightly I think, answered that these spaces are/should be set up in collaboration with people with technical skill (from computing) and that the idea is to connect students to content whether digital or print. Books should be there too, or at least be at hand.
  • Lack of Faculty Coordination. While these spaces are popular with students (see Henning’s Final Report on a survey of learning commons), the deeper problem is integration into the curriculum. Individual faculty may take advantage of the changing services and spaces of the library, but I haven’t seen the deep coordination that changes courses across the curriculum. Faculty assume the library is a service unit that supports their teaching by having books on reserve. We don’t think of the library as a living space where students are talking through our assignments, collaborating, and getting help with their essays. We don’t coordinate changes in how we teach with changes in space and service, but stumble upon new services and weave them into our courses if we have the time (and it does take time to change how you teach).

So here are a couple of ideas:

  • Curated Distributions. We should think along the lines suggested in A World in Three Aisles, Gideon Lewis-Kraus’ fascinating discussion of the Prelingers’ personal curated library, where materials are arranged in associative clusters based on a curatorial practice designed to encourage pursuing topics that cross traditional shelf distribution. Why not invite faculty to curate small collections of books to be distributed among the workstations of a commons where users can serendipitously come across them, wonder why they are there, and browse not just sites, but thematic collections of books?
  • Discovery Centres. Another approach would be to work with chairs and deans to identify key courses or sets of courses and then build spaces with faculty input that are designed for studying for those courses. The spaces would have a mix of meeting spaces optimized for tutorials in the course(s), groupwork spaces for the types of groups formed in the courses, print materials (like books and magazines) needed for the course, and electronic finding aids for online materials related to the course. These topical spaces would be centres for students in these courses to access relevant information, browse related materials, meet other students, and get help. A library could obviously only afford a limited number of these, which is why the idea would be to target stressful first and second year courses where chairs identify the need and opportunity for discovery centres.

Epstein: Dialectics of “Hyper”

Mikhail Epstein’s Hyper in 20th Century Culture: The Dialectics of Transition From Modernism to Postmodernism (Postmodern Culture 6:2, 1996) explores “the intricate relationship of Modernism and Postmodernism as the two complementary aspects of one cultural paradigm which can be designated by the notion ‘hyper’ and which in the subsequent analysis will fall into the two connected categories, those of ‘super’ and ‘pseudo.’” (para 7) Epstein plays with “hyper” as a prefix signifying an excess that goes beyond a limit and then reflects back on itself. Modernist revolutions overturn inherited forms in search of the “super,” but in their excess zeal they pass a limit and become simulations of themselves, the “pseudo.” The hyper encloses both the modernist search for the super truth and the postmodernist reaction to the simulations of modernity. The postmodern play on excess depends on the modernist move for its matter, to the point where it serves to heighten (another meaning of hyper) the super-modern. Super and pseudo thus become intertwined in the ironic hyper.

In the final analysis, every “super” phenomenon sooner or later reveals its own reverse side, its “pseudo.” Such is the peculiarly postmodernist dialectics of “hyper,” distinct from both Hegelian dialectics of comprehensive synthesis and Leftist dialectics of pure negation. It is the ironic dialectics of intensification-simulation, of “super” turned into “pseudo.” (para 60)

Epstein looks at different spheres where this hyper-unfolding takes place, using the word “hyper-textuality” in a different sense than how it is usually used for electronic literature. For Epstein hypertextuality describes a parallel process that happened in Russia and in the West where first modernist literary movements (Russian Formalism and Anglo-American New Criticism) stripped away the historical, authorial, and biographical to understand the pure “literariness” of literature. The purification of literature left only the text as something “wholly dependent on and even engendered by criticism.” (para 21) “Postmodernism emerged no sooner than the reality of text itself was understood as an illusionary projection of a critic’s semiotic power or, more pluralistically, any reader’s interpretative power (‘dissemination of meanings’).” (para 25)

Epstein quotes Baudrillard about the net of mass communication replacing reality with a hyperreality, but doesn’t explore how the hyper in his sense is connected to the excess of networked information. It is in another essay, “The Paradox of Acceleration,” that we see a clue:

Each singular fact becomes history the moment it appears, preserved in audio, visual, and textual images. It is recorded on tape, photographed, stored in the memory of a computer. It would be more accurate to say that each fact is generated in the form of history.

Ultimately, inscription of the fact precedes the occurrence of the fact, prescribing the forms in which it will be recorded, represented, and reflected. (p. 179)

The ironic tension of the modern and postmodern is magnified by the hyper-excess of automated inscription. The excess of information is deadening us to the human in history as an unfolding. We are in a baroque phase where the only thing valued is the hyper-excess itself. Excess of archiving, excess of theory, excess of reference, excess of quotation, excess of material, excess of publication, excess of criticism, excess of attention … but no time.

What next? Will we see the burning of books or a “simple thinking” movement? How do people react to an oppressive excess?

The essay in PMC is excerpted from “The Dialectics of Hyper: From Modernism to Postmodernism,” in Russian Postmodernism: New Perspectives on Post-Soviet Culture. Ed. M. Epstein, A. Genis, and S. Vladiv-Glover. New York: Berghahn Books, 1999. pp. 3-30.

The essay on acceleration, “The Paradox of Acceleration,” is also in Russian Postmodernism, pp. 177-181.

Long Bets Now

Have you ever wanted to go on record with a prediction? Would you like to put money (that goes to charity) on your prediction? The Long Bets Foundation lets you do just that. It is a (partial) spin-off of The Long Now Foundation where you can register and make long-term predictions (up to thousands of years, I believe). The money from bets and challenges goes to charity; all you get if you are right is credit and the choice of charity. An example prediction in the text analysis arena is:

Gregory W. Webster predicts: “That by 2020 a wearable device will be available that will use voice recognition capability and high-volume storage to monitor and index conversations you have or conversations which occur in your vicinity for later searching as supplemental memory.” (Prediction 16)

Some of the other predictions of interest to humanists are: 177 about print on demand, 179 about reading on digital devices, and 295 about a second renaissance.

Long Bets has some interesting people making predictions and bets (a prediction becomes a bet when formally challenged), including Ray Kurzweil betting against Mitch Kapor that “By 2029 no computer – or ‘machine intelligence’ – will have passed the Turing Test.” (Bet 1)

Just to make life interesting, there is prediction 137 that “The Long Bets Foundation will no longer exist in 2104.” 63% of the voters seem to agree!

Interactive Matter Meeting

This weekend I participated in an Interactive Matter (iMatter) meeting in Montreal. The meeting was to figure out next steps for the project after our SSHRC proposal was unsuccessful.

Lynn Hughes and Jane Tingley of Concordia organized meetings at, and tours of, some of the new media organizations in Montreal, including:

  • Hexagram, where we saw the textile labs, robot palace, machine shops, rapid prototyping lab, computer-controlled looms, and so on. Very impressive facilities and research projects.
  • OBORO, a new media artists’ centre with great video and sound facilities.
  • Fondation Daniel Langlois, where we got a tour of the Centre for Research and Documentation (CR+D), which collects materials (including grey literature) about new media art. I was disappointed to learn that, on the issue of new media preservation, they haven’t really advanced past the Variable Media Network discussion published in Permanence Through Change in 2003. They are just storing digital things in a cold dark room for the moment and concentrating on documentation.

One thing that is clear is the critical mass of artists, independent game developers, historians, philosophers, and organizations in Montreal. Montreal even has a Cité Multimédia, where the city is revitalizing an old industrial quarter to be a multimedia incubator. This public investment in multimedia arts, technology, and organizations stands in contrast to the lack of interest in cultural industries elsewhere.

Where is the Semantic Web?

Where is the Semantic Web? In the face of Web 2.0 hype, the semantic web meme seems to be struggling. Tim Berners-Lee, in the slides from a 2003 talk, says there is “no such thing” as a killer app for the semantic web, that “its the integration, stupid!” (slide 7 of 35). The problem is that mashups are giving users usable integration now. The difference is that mashups are usually built around one large content portal, like Flickr, which little satellite tools then feed off. The semantic web was a much more democratic idea of integration.
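
To make the integration point concrete, here is a minimal sketch using Python’s rdflib library. The two Turtle snippets, the example.org URIs, and the names are invented for illustration; the point is that data published independently by different sites can be merged into one graph and queried together, without a central portal.

```python
# A minimal sketch of semantic-web-style integration with rdflib.
# The two Turtle snippets below stand in for data published by two
# independent sites; the URIs and names are invented for illustration.
from rdflib import Graph

site_a = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> foaf:name "Alice" ;
    foaf:knows <http://example.org/people/bob> .
"""

site_b = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/bob> foaf:name "Bob" ;
    foaf:homepage <http://bob.example.net/> .
"""

# Parse both sources into a single graph: because both use shared URIs
# and the shared FOAF vocabulary, the statements simply merge.
graph = Graph()
graph.parse(data=site_a, format="turtle")
graph.parse(data=site_b, format="turtle")

# Query across the merged data with SPARQL: who does Alice know,
# and what is that person's homepage?
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?homepage WHERE {
    <http://example.org/people/alice> foaf:knows ?person .
    ?person foaf:name ?name .
    ?person foaf:homepage ?homepage .
}
"""
for name, homepage in graph.query(query):
    print(name, homepage)  # -> Bob http://bob.example.net/
```

No single portal owns the data here; either snippet could come from any server on the web. That is the democratic integration the semantic web promised, and it is exactly what most mashups, built against one company’s API, do not give you.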

Google’s Peter Norvig is quoted in Google exec challenges Berners-Lee as saying that there are three problems with the semantic web:

  • Incompetence: users don’t know how to use HTML in a standard way, let alone RDF.
  • Competition: companies in a leadership position don’t like to use open standards that could benefit others; they prefer to control the standards to their advantage.
  • Trust: too many people try to trick systems to change the visibility of their pages (e.g., selling Viagra).

In a 2006 Guardian report, Spread the word, and join it up, SA Mathieson quotes Berners-Lee to the effect that they (the semantic web folk) haven’t shown useful stuff. The web of TBL was a case of less is more (compared to SGML and other hypertext systems); the semantic web may lose out to all the creative mashups that are less standardized and more useful.

IDC White Paper: The Digital Universe

In an earlier blog entry I mentioned the IDC report, The Digital Universe, about the explosion of digital information. It was commissioned by EMC Corporation and is available free on their site, here. They also have a page of related information which includes a link to “Are You an Informationist?” and “The Inforati Files”.

The PDF of the IDC White Paper includes some interesting points:

  • Between 2006 and 2010, the information added annually to the digital universe will increase more than sixfold, from 161 exabytes to 988 exabytes.
  • Three major analog-to-digital conversions are powering this growth – film to digital image capture, analog to digital voice, and analog to digital TV.
  • Images, captured by more than 1 billion devices in the world, from digital cameras and camera phones to medical scanners and security cameras, comprise the largest component of the digital universe. They are replicated over the Internet, on private organizational networks, by PCs and servers, in data centers, in digital TV broadcasts, and on digital projection movie screens. Their numbers will only grow as building automation and security migrate to IP networks, surveillance goes digital, and RFID and sensor networks proliferate.

Is it time to rewrite “The Work of Art in the Age of Mechanical Reproduction” to think about “The Image in the Age of Networked Distribution”?

tiddlyspot

I blogged before about TiddlyWiki, the amazing self-contained (HTML, CSS, and JavaScript) wiki in a single web page. I’ve now come across tiddlyspot, where you can create a server-based TiddlyWiki that can be private or public.

I’m convinced that between services like tiddlyspot, Ning.com, Blogger.com, and Flickr.com you can create a robust distributed web presence without needing an ISP. Push your content out into the world.

Wikipedia Issues

Can we trust the Wikipedia? The Guardian Unlimited (among others) has a story, Read me first: Oh, what a tangled web we weave when we practise to deceive (Seth Finkelstein, Mar. 8, 2007), about the latest Wikipedia scandal. A Wikipedia administrator who had been posing as a tenured religion professor turns out to be a 24-year-old with no advanced degrees. What is worse is that Wikia, a for-profit company affiliated with Wikipedia, has hired him, and Wikipedia has been promoting this administrator, suggesting him as a source for a New Yorker article that now has an editor’s disclaimer:

At the time of publication, neither we nor Wikipedia knew Essjay’s real name. Essjay’s entire Wikipedia life was conducted with only a user name; anonymity is common for Wikipedia administrators and contributors, and he says that he feared personal retribution from those he had ruled against online. Essjay now says that his real name is Ryan Jordan, that he is twenty-four and holds no advanced degrees, and that he has never taught. He was recently hired by Wikia—a for-profit company affiliated with Wikipedia—as a “community manager”; he continues to hold his Wikipedia positions.

So what are the ethical issues, and what does this mean for the quality of Wikipedia content? On the second question, an editorial, Wikipedia with caution (Mar. 8, 2007, by the Editorial Board), in The Stanford Daily strikes the right note for me.

Most university-level students should be able to discern between Wikipedia and more reliable online sources like government databases and online periodicals. To be fair, some of Wikipedia’s entries are specific enough to be extremely valuable in studying or researching, but others are shallow, short, and occasionally completely inaccurate.

On the moral issue there is a tension between anonymity, which many people need online to perform their chosen roles, and deception. It could be argued that to preserve anonymity a Wikipedia administrator under the spotlight might have to mislead critics, but Essjay went too far: he tried to build his reputation through deception.

Time to learn your exabytes: Tech researchers calculate wide world of data

161 exabytes of information were generated last year, according to a CBC.ca story, Time to learn your exabytes: Tech researchers calculate wide world of data, by Brian Bergstein (March 5, 2007). That is way up from the estimate in How Much Information? 2003 that I blogged about before. The story quotes John F. Gantz of IDC, but I can’t find the paper on the IDC site.

Wired News also has a version of the story, but again they link to the general IDC site.

Thanks to Matt and Mike for this.