“Rescue Tenure From the Tyranny of the Monograph” by Lindsay Waters in The Chronicle of Higher Education argues that we are spewing out too many second-rate books as we force new scholars to publish one or two to get tenure. His remedy is to return to a few excellent essays for tenure and to publish fewer books that are full of “gusto” (accessible and moving to a larger audience).
The realities of the pressures to get tenure are unlikely to change, so I doubt the community can easily change course, but what if the form in which early publishing took place were changed? What if blogs, wikis, discussion-list participation, and other forms of social/network writing were assessed? Early in a career is when academics should be writing with and for others in order to establish their network of ideas. Books can come later out of what has been tested in the creative common.
How would one assess quality (and quantity) for such writing? I can think of some bibliometric methods (Google as tenure tool), but they would be crude and easy to manipulate. Ideas?
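To give a sense of how crude such a metric would be, here is a toy sketch (all the URLs are hypothetical): it simply counts inbound links to a scholar's posts within a small corpus, something a handful of friendly blogs could obviously game.

```python
# A deliberately crude "link counting" metric: given a small corpus of
# pages (a dict mapping page URLs to the URLs they link to), count how
# often each of a scholar's posts is linked. All URLs are made up.

from collections import Counter

corpus = {
    "http://example.org/blogA/post1": ["http://example.org/scholar/essay-on-wikis"],
    "http://example.org/blogB/post7": ["http://example.org/scholar/essay-on-wikis",
                                       "http://example.org/scholar/purple-numbers"],
    "http://example.org/blogC/post2": ["http://example.org/elsewhere/unrelated"],
}

scholar_posts = {
    "http://example.org/scholar/essay-on-wikis",
    "http://example.org/scholar/purple-numbers",
}

inbound = Counter()
for page, links in corpus.items():
    for link in links:
        if link in scholar_posts:
            inbound[link] += 1

for post, count in inbound.most_common():
    print(count, post)
```

The weakness is obvious: the metric rewards whoever links most, not whoever thinks best.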
Continue reading Rescue Tenure from the Monograph
Purple Granular Addressability
In the comments to the entry on Tool and Technology I commented on the paragraph numbers in Chris Dent’s essays. He responded by pointing me to PurpleWiki Archive. There is also an essay, An Introduction to Purple, which connects the granular addressability of Purple to Engelbart.
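To make the idea concrete, here is a toy sketch (not PurpleWiki's actual code) of purple-number style granular addressability: every paragraph gets a stable id and a small anchor link pointing at itself, so individual paragraphs can be cited by URL fragment.

```python
# Toy illustration of purple-number granular addressability: give every
# paragraph a stable id and append a small anchor link to itself.

paragraphs = [
    "Tools shape how we think about texts.",
    "Granular addresses let us cite a paragraph, not just a page.",
]

html = []
for n, text in enumerate(paragraphs, start=1):
    nid = f"p{n:03d}"  # e.g. p001, p002 ...
    html.append(
        f'<p id="{nid}">{text} '
        f'<a class="purple" href="#{nid}">#{nid}</a></p>'
    )

print("\n".join(html))
```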
Continue reading Purple Granular Addressability
Many 2 Many
Many-to-Many is a group weblog on Social Software by Sebastian Paquet and friends. The design and idea parallel what we are doing with a private wiki on TAPoR.
What can we learn from it?
Continue reading Many 2 Many
The Listening Post
The Listening Post is a networked installation that culls text from online sources and displays and synthesizes it.
This anticipates a project on the sonification of text that I am working on with Bill Farkas, who has developed some cool sonification systems.
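As a placeholder for what such sonification might look like, here is a naive sketch (not Farkas's system): each letter is mapped to a pitch and the result is written out as a WAV file with Python's standard library.

```python
# A naive text sonification sketch: map each character to a pitch and
# write the result as a mono 16-bit WAV file.

import math
import struct
import wave

RATE = 44100        # samples per second
NOTE_LEN = 0.15     # seconds per character

def char_to_freq(ch):
    # Map letters onto a simple rising scale; everything else is silence.
    if ch.isalpha():
        return 220.0 * (2 ** ((ord(ch.lower()) - ord("a")) / 12.0))
    return 0.0

def sonify(text, filename="text.wav"):
    frames = bytearray()
    for ch in text:
        freq = char_to_freq(ch)
        for i in range(int(RATE * NOTE_LEN)):
            sample = math.sin(2 * math.pi * freq * i / RATE) if freq else 0.0
            frames += struct.pack("<h", int(sample * 32000))
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

sonify("listening post")
```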
txtkit – Visual Text Mining
txtkit – Visual Text Mining Tool is a Mac OS X 10.3 networked application that lets you explore and interact with texts through a command-line interface and visualizations. The visualizations, if I understand them, weave the text and user behaviour together with information about other users. It produces some of the most beautiful visualizations I have seen in a while.
See also their Related Links.
Miscellany and Interactivity
Lisbeth Klastrup: Paradigms of interaction: conceptions and misconceptions of the field today is a good overview of the literature on interactivity.
I found the link for this from a blog by Jason Rhody at the University of Maryland who has a category archive on the interactive.
23¢ stories
23¢ stories is a delightful Flash site where you choose a postcard and can read or write a short story for that postcard. Its theme is New York, and the stories are provoked by pictures of the city.
Vademecum of digital art
vademecum de l’art numérique is a set of precepts for the critique of digital art from GRATIN (or Groupe de Recherches en Art et Technologies Interactives et/ou Numériques).
Manifesto for Responsible Creative Computing v.0.3
Adrian Miles has posted A Manifesto For Responsible Creative Computing v.0.3 which should be read by all students who think that what they are going to get is tips for Photoshop.
I would extend it with:
- Creative computing extends curiosity about what could be.
- Interrupting, repurposing and transcoding are fundamental practices in digital literacy.
- Responsible computing is open.
Error Correction
In a paper I gave in Georgia I picked up on a comment by Negroponte in Being Digital to the effect that error correction is one of the fundamental advantages of digital (vs analog) data. Automatic error correction makes lossless copying and transmission possible. Digital Revolution (III) – Error Correction Codes is the third in a set of Feature Column essays on the “Digital Revolution.” (The other two are on Barcodes and Compression Codes and Technologies.)
To exaggerate, we can say that error correction makes computing possible. Without error correction we could not automate computing reliably enough to use it outside the lab. Something as simple as moving data off a hard-drive across the bus to the CPU can only happen at high speeds and repeatedly if we can build systems that guarantee what-was-sent-is-what-was-got.
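A toy example of the principle (much simpler than the codes discussed in those essays) is the triple-repetition code: send every bit three times and decode by majority vote, so any single flipped copy is corrected.

```python
# Toy error correction: a triple-repetition code. Each bit is sent three
# times; a single flipped copy is outvoted on decoding, so
# what-was-sent-is-what-was-got even over a noisy channel.

import random

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]
sent = encode(message)

# Noisy channel: flip one randomly chosen copy of one bit.
noisy = sent[:]
noisy[random.randrange(len(noisy))] ^= 1

assert decode(noisy) == message
print("recovered:", decode(noisy))
```

Real codes (Hamming, Reed-Solomon and the like) achieve the same guarantee with far less redundancy, but the vote captures the idea.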
There are exceptions, and here is where it gets interesting. Certain types of data – images, audio, video and text, namely media data – can still be useful when corrupted, while other types become useless if corrupted. Data that is meant for output to a human for interpretation needs less error correction (and can be compressed using lossy compression) while still remaining usable. Could such media carry a surplus of information from which we can correct for loss, an analog equivalent to symbolic error correction?
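A rough way to see this is that media samples tolerate being coarsened: quantizing a sampled waveform to a handful of levels (a crude stand-in for lossy compression) still leaves a recognizable shape, where the same treatment of a symbolic value would simply change its meaning. A small sketch:

```python
# Why media data tolerates loss: quantize a sampled waveform to a few
# discrete levels and the overall shape survives.

import math

samples = [math.sin(2 * math.pi * t / 16) for t in range(16)]

def quantize(x, levels=8):
    # Snap a value in [-1, 1] onto a small number of discrete levels.
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

lossy = [quantize(s) for s in samples]

for original, coarse in zip(samples, lossy):
    print(f"{original:+.3f} -> {coarse:+.3f}")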
Another way to put this is that there is always noise. Data is susceptible to noise when transmitted, when stored, and when copied.
Continue reading Error Correction