Perseus Digital Library Diachronic View is a YouTube video put together by Mihaela as part of a project we are working on for INKE. We started asking about the evolution of digital humanities interfaces, which led us to ask whether there were projects that had been around long enough for their interfaces to have changed. This led us to the Perseus project, which existed before the web. Using the Internet Archive and other sources we tried to reconstitute a history of major versions of the interface, starting from the first HyperCard interface. We then created this video to show the evolution. We are now collaborating with Perseus to study the evolution of their interface, to preserve key screens, and to improve the interface for mobile devices.
Category: Interface Design and Usability
The Old Bailey Datawarehousing Interface
The latest version of our Old Bailey Datawarehousing Interface is up. This was the Digging Into Data project that got TAPoR, Zotero and Old Bailey working together. One of the things we built was an advanced visualization environment for the Old Bailey. This was programmed by John Simpson following ideas from Joerg Sanders. Milena Radzikowska did the interface design work and I wrote emails.
One feature we have added is the broaDHcast widget that allows projects like Criminal Intent to share announcements. This was inspired partly by the issues of keeping distributed projects like TAPoR, Zotero and Old Bailey informed.
Google Art Project
An article in the New York Times led me to the Google Art Project. This project doesn’t feel like a Google project, perhaps because it uses an off-black background and the interface is complex. The project brings together artwork and virtual tours of many of the world’s important museums (but not all). You can browse by collection, artist (by first name), artworks, and user galleries. You can change the language of the interface (and it seems to change even when you don’t want it to in certain circumstances). When viewing a gallery you can get a wall of paintings or a street-view virtual tour of the gallery. Above you see the “Museum View” of a room in the Uffizi with a barrier around a Filippino Lippi that is being treated for a woodworm infestation! In the Museum View you can pan around and move up to paintings much as you would in Street View on Google Maps. On the left is a floor plan that you can also use.
This site reminds me of what was one of the best multimedia CD-ROMs ever, the Musée d’Orsay: Virtual Visit, which used QuickTime VR to provide a virtual tour. It had the sliding walls of art. It also had special guides and some nice comparison tools that let you get a sense of the size of a work of art. The Google Art Project feels loosely copied from this, right down to the colour scheme. It will be interesting to see if the Google Art Project subsumes individual museum sites or consortia like the Art Museum Image Consortium (AMICO).
I find it interesting how Google is developing specialized interfaces for more and more domains. The other day I was Googling for movies in Edmonton and found myself on a movies – Google Search page that arranges information conveniently. The standard search interface is adapting.
New Variorum Shakespeare Digital Challenge
The MLA has issued a New Variorum Shakespeare Digital Challenge. They are looking for original and innovative ways of “representing, and exploring this data.” You can download the XML files and schema from Github here to experiment with. Submissions are due by Friday, 31st of August, 2012. The winner of the challenge will get $500.
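Since the challenge data comes as XML files with a schema, a first exploratory step might be simply tallying the markup to see what elements the edition uses. Here is a minimal sketch in Python using the standard library; the fragment below is a tiny stand-in I made up for illustration, not the real NVS markup or schema:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A made-up stand-in fragment -- not the actual NVS files, just the
# general shape of an XML play text with numbered lines and variants.
sample = """
<play>
  <line n="1">To be, or not to be, that is the question:</line>
  <line n="2">Whether 'tis nobler in the mind to suffer</line>
  <variant line="1" witness="Q1">that is the point</variant>
</play>
"""

root = ET.fromstring(sample)

# Tally element tags as a quick census of the markup.
counts = Counter(elem.tag for elem in root.iter())
print(counts)

# Inspect the attributes on the first variant reading.
print(root.find("variant").attrib)
```

With the real files you would swap `ET.fromstring(sample)` for `ET.parse()` on a downloaded file; the same tag census is a cheap way to get oriented before designing a visualization.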
How Star Trek artists imagined the iPad… 23 years ago

How Star Trek artists imagined the iPad… 23 years ago is an article in Ars Technica about the design of the iconic Star Trek interfaces, from those of PADDs (Personal Access Display Devices) to the touch screens used on the bridge. It turns out that one of the reasons for the flat touch-screen interfaces was that they were cheap (compared to the panels full of switches that contemporary spacecraft had).
What could be simpler to make than a flat surface with no knobs, buttons, switches, or other details? Okuda designed a user interface dominated by large type and sweeping, curved rectangles. The style was first employed in Star Trek IV: The Voyage Home for the Enterprise-A, and came to be referred to as “okudagrams.” The graphics could be created on transparent colored sheets very cheaply, though as ST:TNG progressed, control panels increasingly used video panels or added post-production animations.
The Sketchbook of Susan Kare, the Artist Who Gave Computing a Human Face
Steve Silberman has written a great story about The Sketchbook of Susan Kare, the Artist Who Gave Computing a Human Face. Susan Kare was the artist hired to design fonts and icons for the Mac. She designed the now “iconic” icons in a graph-paper sketchbook. The story was occasioned by the publication of a book titled Susan Kare Icons, which shows some of the icons she has created over the years. (She also has prints of some of the more famous icons, like the Mac with a happy face.)
Issuu – You Publish

Thanks to Sharla I came across Issuu, a site for publishing magazine-like documents online. You get an account, you upload documents, and they create an interactive page-flipping e-publication out of them. When you “Click to read” a publication, an application takes over your screen to give you a reading environment. They seem to have a lot of publications made available this way.
Infomous Clouds
I was on The Atlantic site and noticed a neat visualization badge by Infomous. It is a variant on the usual word cloud that draws lines between related words and puts simple cloud outlines around groups of them. As you can see, it doesn’t always get the clouds right. On the left you have Japan connected to protesters and protesters connected to Syria. There is not, however, any connection between Japan and Syria, except that protests are happening in both.
If you get an account Infomous lets you make your own clouds.
Update: Pablo Funes from Icosystem Corp sends this email comment on the post:
We use Mark Newman’s algorithm for network communities to identify clusters of news. In your example, Japan and Syria are both connected to “protesters” and therefore share the same cluster even though there are no news articles that bear on both Japan and Syria (so there is no direct connection between both terms). One could argue, with this example at least, that there is a worldwide series of events that have been unfolding over the last few months, with public protests as the visible common feature (Tunisia, Egypt, Libya, and so on) which makes the connection “countries where protests are happening” a relevant one. And yet, it is true that sometimes the connection is not relevant at all, as it happens when generic words, such as “video” or “said” for example, are shared across news stories.
Our Appinions-based clouds rely on sophisticated semantic analysis provided by Appinions.com (see http://www.infomous.com/site/events/JapanNuclear/). Here, topics are connected because they are discussed by the same web user in the same posting. We use the same algorithm to identify clusters in this network. You can turn off clustering by unchecking “groups” on the bottom toolbar.
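The clustering behaviour Funes describes can be illustrated with a toy co-occurrence graph. The sketch below uses networkx’s greedy modularity communities (the Clauset–Newman–Moore algorithm, a relative of the Newman method he mentions, not Infomous’s actual code or data); the point is that two terms with no direct edge can still land in the same cluster through a shared hub like “protesters”:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy co-occurrence graph: an edge links two terms that appear in the
# same news story. "Japan" and "Syria" never co-occur directly.
G = nx.Graph()
G.add_edges_from([
    ("Japan", "protesters"),
    ("Syria", "protesters"),
    ("Japan", "earthquake"),
    ("Syria", "Damascus"),
])

# The only thing the two countries share is the hub term.
shared = set(G.neighbors("Japan")) & set(G.neighbors("Syria"))
print(shared)  # {'protesters'}

# Modularity-based clustering groups nodes by connection patterns,
# so membership can follow shared neighbours rather than direct edges.
communities = list(greedy_modularity_communities(G))
for c in communities:
    print(sorted(c))
```

This is exactly the Japan–Syria case from the cloud above: no edge between the two countries, but a path through a common term that the clustering (quite reasonably, by its own lights) treats as a connection.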
NFB: Out My Window
Joyce pointed me to a National Film Board (NFB) interactive work, Out My Window: Interactive Views from the Global Highrise. The work, directed by Katerina Cizek, documents the lives of highrise dwellers through views of their apartments. For each apartment there is a 360-degree view that you can pan around (sort of like QuickTime VR). Certain things can be clicked on to hear and see short documentaries with the voice of the dweller. These delicate stories are very effective at giving us a view of apartment life around the world.
SnapDragonAR from Future Stories
Mo pointed me to an announcement that SnapDragonAR has been officially released. This comes from work at the Future Cinema Lab at York University. SnapDragonAR is a simple way to get augmented reality: you have a deck of cards with glyphs on them that the computer can recognize and replace with media clips (video shorts or images). The glyphs can be put on things, or you can just use the cards. The software then takes the video from a webcam, replaces the glyphs with the media objects you want, and projects the resulting “augmented” video onto the screen (or a projector). It is neat and works with any current Mac.
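The core compositing step of this kind of glyph-replacement pipeline can be sketched in a few lines of NumPy. This is my own toy illustration, not SnapDragonAR’s implementation: glyph detection (the hard part, usually handled by a marker-tracking library) is assumed to have already produced bounding boxes, and the “media clip” here is just a tiny image pasted over each box:

```python
import numpy as np

def overlay_media(frame, glyph_boxes, media):
    """Replace each detected glyph region in a video frame with a
    resized copy of a media image. Detection is assumed done elsewhere;
    glyph_boxes is a list of (x, y, width, height) tuples."""
    out = frame.copy()
    for (x, y, w, h) in glyph_boxes:
        # Nearest-neighbour resize of the media image to fit the box.
        ys = np.linspace(0, media.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, media.shape[1] - 1, w).astype(int)
        out[y:y + h, x:x + w] = media[ys][:, xs]
    return out

# Toy 8x8 grayscale "frame" and a 2x2 "media" image.
frame = np.zeros((8, 8), dtype=np.uint8)
media = np.array([[10, 20], [30, 40]], dtype=np.uint8)
result = overlay_media(frame, [(2, 2, 4, 4)], media)
print(result)
```

In a real pipeline this runs per frame on the webcam feed, with the boxes (and a perspective warp rather than a straight resize) coming from the marker tracker.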