The New York Times and the National Film Board (of Canada) have collaborated on a great interactive, A Short History of the Highrise. The interactive plays as a documentary that you can stop at any point to explore details. On the About page, the director, Katerina Cizek, talks about her inspiration:
I was inspired by the ways storybooks have been reinvented for digital tablets like the iPad. We used rhymes to zip through history, and animation and interactivity to playfully revisit a stunning photographic collection and reinterpret great feats of engineering.
The New York Public Library has another cool digital project called the Building Inspector. They are crowdsourcing the training and correction of a building-recognition tool that is combing through old maps. You see a portion of a map with red dots outlining a building, and you click “Yes” (if the outline is correct), “No” (if it is wrong), or “Fix” (if it is close but needs to be fixed).
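How the library actually aggregates these clicks isn't described here, but the general pattern is simple majority voting over crowd responses. As a minimal sketch (the footprint IDs, vote lists, and `quorum` parameter are all hypothetical), each candidate building outline collects “yes”/“no”/“fix” answers until enough people have weighed in:

```python
from collections import Counter

# Hypothetical crowd responses: footprint id -> list of clicks from different users.
votes = {
    "footprint-001": ["yes", "yes", "yes", "fix"],
    "footprint-002": ["no", "no", "yes"],
    "footprint-003": ["fix", "fix", "yes", "fix"],
}

def decide(responses, quorum=3):
    """Return the majority answer once at least `quorum` responses are in."""
    if len(responses) < quorum:
        return "pending"
    label, count = Counter(responses).most_common(1)[0]
    # Require a strict majority, not just a plurality.
    return label if count > len(responses) / 2 else "pending"

decisions = {fid: decide(r) for fid, r in votes.items()}
print(decisions)
```

Outlines that win a “fix” majority would then be routed back to volunteers for correction, which is what makes the game loop ("Kill Time. Make History.") self-sustaining.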
They also have a neat subtitle to the project, “Kill Time. Make History.”
Reading a collection of stories in the Atlantic about women and technology, I came across an article, The Wedding Data: What Marriage Notices Say About Social Change. It discusses Wedding Crunchers – a database of New York Times wedding announcements since 1981 that you can search in an environment much like Google’s Ngram Viewer. In the chart above you can see that I searched for different professions. Note how “teacher” takes off, probably because of the popularity of Teach for America.
I can’t help wondering if we are seeing the emergence of a genre of text visualization – the diachronic word viewer. This type of visualization depends on an association between orthographic words (the actual words in texts) and concepts.
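The core of such a viewer is nothing more than relative word frequency computed per year over a dated corpus. A minimal sketch, with a toy stand-in corpus (the sample texts and function name are illustrative, not taken from Wedding Crunchers or the Ngram Viewer):

```python
from collections import Counter

# Hypothetical dated corpus: (year, text) pairs standing in for a
# collection like the Times wedding announcements.
corpus = [
    (1981, "the bride is a teacher and the groom a banker"),
    (1981, "the groom is a lawyer"),
    (1995, "the bride is a lawyer and the groom a teacher"),
    (2009, "the bride is a teacher the groom is a teacher"),
]

def yearly_relative_frequency(corpus, target):
    """Map each year to the relative frequency of `target` among that year's words."""
    counts, totals = Counter(), Counter()
    for year, text in corpus:
        words = text.lower().split()
        totals[year] += len(words)
        counts[year] += words.count(target.lower())
    return {year: counts[year] / totals[year] for year in sorted(totals)}

print(yearly_relative_frequency(corpus, "teacher"))
```

Plotting these year-by-year frequencies as a line is all a diachronic word viewer does; the interpretive leap – the part that makes it risky – is reading the orthographic word “teacher” as a proxy for the concept of teaching as a profession.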
Textal is a mobile app (for the iPhone) that lets you “explore the words used in your favourite book, document, website, or twitter stream.” It looks beautiful, but I can’t find it on the App Store. I like the idea of having something like this for my iPad, on which I read more and more.
In Dublin I heard DAH student Maura McDonnell present on Visual Music (her blog), which is her PhD research area. Visual Music is one term among many for experiments in light and sound, and her blog is a nice collection of resources on this new media form.
I’ve been meaning to blog on the video circulating of Kurt Vonnegut talking about the Shape of Stories. He describes the curves followed by popular stories like “boy meets girl” and suggests computers could even understand such simple curves. In Lapham’s Quarterly you can read the text of this lecture with illustrations: see Kurt Vonnegut at the Blackboard. In this version he asks about the value of such systems, a question that could apply equally to computer-generated visualization:
The question is, does this system I’ve devised help us in the evaluation of literature? Perhaps a real masterpiece cannot be crucified on a cross of this design. How about Hamlet?
He concludes that the system doesn’t work because the truth is ambiguous. We simply don’t know in complex works (like Hamlet) if news is good or bad. Good literature is open to interpretation.
But there’s a reason we recognize Hamlet as a masterpiece: it’s that Shakespeare told us the truth, and people so rarely tell us the truth in this rise and fall here [indicates blackboard]. The truth is, we know so little about life, we don’t really know what the good news is and what the bad news is.
Prism is the coolest idea I have come across in a long time. Coming from the University of Virginia Scholars’ Lab, Prism is a collaborative interpretation environment. Someone comes up with categories like “Rhetoric”, “Orientalism”, and “Social Darwinism” for a text like Notes on the State of Virginia. Then people (with accounts, which you can get freely) go through and mark passages. This creates overlapping interpretative markup of the sort you used to get with COCOA in TACT, but unlike TACT, many people can do the interpretation – it can be crowdsourced.
They are planning some visualizations of the results, including what look like the distribution visualizations TACT gave, where you can see how words are distributed over tagged areas.
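Prism's internal data model isn't documented here, but overlapping interpretive markup of this kind can be sketched as character spans tagged by user and category, with agreement computed per character (the text, user names, spans, and `agreement` helper below are all hypothetical):

```python
def agreement(highlights, category, length):
    """Count, per character position, how many users marked it with `category`."""
    counts = [0] * length
    for user, cat, start, end in highlights:
        if cat == category:
            for i in range(start, end):
                counts[i] += 1
    return counts

text = "All men are created equal, endowed with certain unalienable rights."

# Hypothetical crowdsourced markup: (user, category, start, end) character spans.
# Spans may overlap freely, unlike nested XML-style markup.
highlights = [
    ("alice", "Rhetoric", 0, 25),
    ("bob",   "Rhetoric", 12, 25),
    ("alice", "Social Darwinism", 27, 67),
]

rhetoric = agreement(highlights, "Rhetoric", len(text))
print(max(rhetoric))  # strongest agreement on any position for "Rhetoric"
```

Because spans are stored flat rather than nested, two readers' interpretations can cross each other freely – which is exactly what overlapping COCOA-style markup allowed and strict hierarchical markup forbids. A heat map of these per-character counts would give the TACT-like distribution view.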
While I couldn’t attend, the Edmonton Dorkbot had a live coding event organized by Scott Smallwood. See Vadim Bulitko’s photos at Edmonton Dorkbot, Oct 11 (click the links to go to the YouTube videos).