Rescue Tenure from the Monograph

“Rescue Tenure From the Tyranny of the Monograph” by Lindsay Waters in The Chronicle of Higher Education argues that we are spewing out too many second-rate books because we force new scholars to publish one or two to get tenure. His remedy is to return to a few excellent essays as the basis for tenure and to publish fewer books, ones full of “gusto” (accessible and moving to a larger audience).
The realities of the pressure to get tenure are unlikely to change, so I doubt the community can easily change course, but what if the form early publishing takes were changed? What if blogs, wikis, discussion-list participation, and other forms of social/networked writing were assessed instead? Early in a career is when academics should be writing with and for others in order to establish their network of ideas. Books can come later, out of what has been tested in the creative common.
How would one assess quality (and quantity) for such writing? I can think of some bibliometric methods (Google as tenure tool), but they would be crude and easy to manipulate. Ideas?

The Listening Post

The Listening Post is a networked installation that culls text from online sources and then displays and synthesizes it.
This anticipates a project on the sonification of text that I am working on with Bill Farkas, who has developed some cool sonification systems.
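Neither Bill's systems nor our project are described here, so the following is only a toy sketch of one common approach to sonifying text: map a simple feature of each word (here, its length) to a pitch and render the sequence as audio. The mapping and every parameter are invented placeholders.

```python
# Toy text sonification: one tone per word, pitch driven by word length.
import math
import struct
import wave

RATE = 44100          # samples per second
NOTE_SECONDS = 0.2    # duration of each word's tone

def word_to_frequency(word):
    """Map a word's length onto a chromatic scale starting at 220 Hz."""
    return 220.0 * 2 ** ((len(word) % 12) / 12.0)

def sonify(text, path="text.wav"):
    frames = bytearray()
    for word in text.split():
        freq = word_to_frequency(word)
        for i in range(int(RATE * NOTE_SECONDS)):
            sample = int(20000 * math.sin(2 * math.pi * freq * i / RATE))
            frames += struct.pack("<h", sample)   # 16-bit signed sample
    with wave.open(path, "w") as out:
        out.setnchannels(1)
        out.setsampwidth(2)       # 2 bytes = 16-bit audio
        out.setframerate(RATE)
        out.writeframes(bytes(frames))

sonify("a networked installation that culls text from online sources")
```

Even something this crude makes the point: once text is treated as a stream of features, any of them can be heard.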

txtkit – Visual Text Mining

txtkit – Visual Text Mining Tool is a Mac OS X 10.3 networked application that lets you explore and interact with texts through a command-line interface and accompanying visualizations. The visualizations, if I understand them, weave the text and the user's behaviour together with information about other users. It produces some of the most beautiful visualizations I have seen in a while.

See also their Related Links.

Error Correction

In a paper I gave in Georgia I picked up on a comment by Negroponte in Being Digital to the effect that error correction is one of the fundamental advantages of digital (vs analog) data. Automatic error correction makes lossless copying and transmission possible. Digital Revolution (III) – Error Correction Codes is the third in a set of Feature Column essays on the “Digital Revolution.” (The other two are on Barcodes and Compression Codes and Technologies.)
To exaggerate, we can say that error correction makes computing possible. Without error correction we could not automate computing reliably enough to use it outside the lab. Something as simple as moving data off a hard-drive across the bus to the CPU can only happen at high speeds and repeatedly if we can build systems that guarantee what-was-sent-is-what-was-got.
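To make the symbolic side concrete, here is a toy sketch of a Hamming(7,4) code in Python. It is not the particular scheme the Feature Column essays work through, just the general mechanism: three parity bits are added to four data bits so that any single flipped bit can be located and corrected.

```python
# Hamming(7,4): encode four data bits with three parity bits,
# then locate and fix a single-bit error on decode.

def hamming74_encode(data):
    """data is four bits; returns seven bits laid out p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """code is seven bits, at most one of them flipped; returns the four data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # names the 1-indexed position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                       # simulate noise on the wire
print(hamming74_decode(codeword))      # -> [1, 0, 1, 1], the original data
```

The overhead (seven bits sent for four bits of data) is the price of the guarantee that what-was-sent-is-what-was-got, at least for single-bit errors.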
There are exceptions, and this is where it gets interesting. Certain types of data (images, audio, video, and text: media data, in other words) can still be useful when corrupted, while other data becomes useless the moment it is corrupted. Data that is meant to be output to a human for interpretation needs less error correction (and can be compressed with lossy compression) while still remaining usable. Could such media carry a surplus of information from which we can correct for loss, an analog equivalent to symbolic error correction?
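One way to picture that surplus: a corrupted value in a run of media data can often be concealed by interpolating its neighbours, with no explicit code involved. A toy sketch, with invented numbers standing in for audio samples:

```python
# Error concealment by interpolation: a wild sample is replaced with
# the average of its neighbours, relying on the redundancy of media data.
samples = [10, 12, 14, 999, 18, 20]   # one obviously corrupted sample

def conceal(values, threshold=100):
    """Replace any value that jumps away from both neighbours by more than the threshold."""
    repaired = list(values)
    for i in range(1, len(values) - 1):
        if (abs(values[i] - values[i - 1]) > threshold
                and abs(values[i] - values[i + 1]) > threshold):
            repaired[i] = (values[i - 1] + values[i + 1]) // 2
    return repaired

print(conceal(samples))   # -> [10, 12, 14, 16, 18, 20]
```

The repaired value is only an estimate, not a guarantee; that is exactly the difference between concealment and symbolic error correction.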
Another way to put this is that there is always noise. Data is susceptible to noise when transmitted, when stored, and when copied.