Google Art Project

An article in the New York Times led me to the Google Art Project. This project doesn’t feel like a Google project, perhaps because it uses an off-black background and the interface is complex. The project brings together artwork and virtual tours of many of the world’s important museums (but not all). You can browse by collection, artist (by first name), artworks, and user galleries. You can change the language of the interface (and in certain circumstances it seems to change even when you don’t want it to). When viewing a gallery you can get a wall of paintings or a street-view-style virtual tour of the gallery. Above you see the “Museum View” of a room in the Uffizi with a barrier around a Filippino Lippi that is being treated for a woodworm infestation! In the Museum View you can pan around and move up to paintings much as you would in Street View on Google Maps. On the left is a floor plan that you can also use.
This site reminds me of what was one of the best multimedia CD-ROMs ever, the Musée d’Orsay: Virtual Visit. It used QuickTime VR to provide a virtual tour and had the same sliding walls of art. It also had special guides and some nice comparison tools that let you get a sense of the size of a work of art. The Google Art Project feels loosely copied from it, right down to the colour scheme. It will be interesting to see whether the Google Art Project subsumes individual museum sites or consortia like the Art Museum Image Consortium (AMICO).

I find it interesting how Google is developing specialized interfaces for more and more domains. The other day I was Googling for movies in Edmonton and found myself on a movies – Google Search page that arranges information conveniently. The standard search interface is adapting.

How Star Trek artists imagined the iPad… 23 years ago

How Star Trek artists imagined the iPad… 23 years ago is an article in Ars Technica about the design of the iconic Star Trek interfaces, from the PADDs (Personal Access Display Devices) to the touch screens used on the bridge. It turns out that one of the reasons for the flat touch-screen interfaces was that they were cheap (compared to panels with lots of switches, as contemporary spacecraft had).

What could be simpler to make than a flat surface with no knobs, buttons, switches, or other details? Okuda designed a user interface dominated by large type and sweeping, curved rectangles. The style was first employed in Star Trek IV: The Voyage Home for the Enterprise-A, and came to be referred to as “okudagrams.” The graphics could be created on transparent colored sheets very cheaply, though as ST:TNG progressed, control panels increasingly used video panels or added post-production animations.

The Sketchbook of Susan Kare, the Artist Who Gave Computing a Human Face

Steve Silberman has written a great story about The Sketchbook of Susan Kare, the Artist Who Gave Computing a Human Face. Susan Kare was the artist hired to design fonts and icons for the Mac. She designed the now “iconic” icons in a graph-paper sketchbook. The story was occasioned by the publication of a book titled Susan Kare Icons, which shows some of the icons she has created over the years. (She also has prints of some of the more famous icons, like the Mac with a happy face.)

Issuu – You Publish

Thanks to Sharla I came across Issuu, a site for publishing magazine-like documents online. You get an account, you upload documents, and they create an interactive page-flipping e-publication out of them. When you “Click to read” a publication, an application takes over your screen to give you a reading environment. They seem to have a lot of publications made available this way.

Infomous Clouds

I was on The Atlantic site and noticed a neat visualization badge by Infomous. It is a variant on the usual word cloud that draws lines between related words and puts simple cloud circles around groups of related words. As you can see, it doesn’t always get the clouds right. On the left you have Japan connected to protesters and protesters connected to Syria. There is not, however, any connection between Japan and Syria except that protests are happening in both.

If you get an account Infomous lets you make your own clouds.


Update: Pablo Funes from Icosystem Corp sends this email comment on the post:

We use Mark Newman’s algorithm for network communities to identify clusters of news. In your example, Japan and Syria are both connected to “protesters” and therefore share the same cluster even though there are no news articles that bear on both Japan and Syria (so there is no direct connection between both terms). One could argue, with this example at least, that there is a worldwide series of events that have been unfolding over the last few months, with public protests as the visible common feature (Tunisia, Egypt, Libya, and so on) which makes the connection “countries where protests are happening” a relevant one. And yet, it is true that sometimes the connection is not relevant at all, as it happens when generic words, such as “video” or “said” for example, are shared across news stories.

Our Appinions-based clouds rely on sophisticated semantic analysis provided by Appinions.com (see http://www.infomous.com/site/events/JapanNuclear/). Here, topics are connected because they are discussed by the same web user in the same posting. We use the same algorithm to identify clusters in this network. You can turn off clustering by unchecking “groups” on the bottom toolbar.

NFB: Out My Window

Joyce pointed me to a National Film Board (NFB) interactive work, Out My Window: Interactive Views from the Global Highrise. The work, directed by Katerina Cizek, documents the lives of highrise dwellers through their apartments. For each apartment there is a 360-degree view that you can pan around (sort of like QuickTime VR). Certain things can be clicked to hear and see short documentaries voiced by the dweller. These delicate stories are very effective at giving us a view of apartment life around the world.

SnapDragonAR from Future Stories

Mo pointed me to an announcement that SnapDragonAR has been officially released. This comes from work at the Future Cinema Lab at York University. SnapDragonAR is a simple way to get augmented reality – you have a deck of cards with glyphs on them that the computer can recognize and replace with media clips (video shorts or images). The glyphs can be put on things or you can just use the cards. The software then takes the video from a webcam, replaces the glyphs with the media objects you want, and projects the resulting “augmented” video onto the screen (or a projector). It is neat and works with any current Mac.

Save The Words: Adopt a Word

Save The Words is a neat social project set up by Oxford University Press where you can “adopt” a rarely used word like “egrote” (to feign an illness). If you adopt a word, the idea is that you will use it in various ways. The site suggests you could name a pet with the word, get a tattoo, walk around with a signboard, and so on.

Old words, wise words, hard-working words. Words that once led meaningful lives but now lie unused, unloved and unwanted.

You can change all that. Help save the words.

The interface to the project is also worth noting. The Flash interface has a wall of words in different fonts and on different surfaces, as if they were photographs of words in context. A faux version of our Dictionary of Words in the Wild.

Glass

Thanks to Erik I have discovered an interesting web annotation tool called Glass. At the moment it is in beta and you have to get an invitation code to open an account, but the codes aren’t that hard to get.

Glass lets you add glass slides to web pages that you can invite other Glass users to see. These slides can hold conversations about a web site. You could use it to discuss an interface with a graphic designer. There might be educational uses too.

The interface of Glass is clean and it seems to nicely meet a need. Now … can we make a game with it? Can we do with it what PMOG (now called the Nethernet) was doing?