Toogle and Woogle

Woogle – Words in pictures is an art project that takes a phrase and builds it out of images retrieved from Google.

Toogle Image Search takes a word, finds a matching image on Google, and converts it into a text version of the image in which the word is repeated in different colours to make up the picture.
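
For those curious how the effect works, here is a minimal sketch (not Toogle's actual code) of one way to render an image as coloured repetitions of a word: sample the picture on a coarse grid and emit the word in an HTML span coloured with each cell's pixel value. Pillow, the file name photo.jpg and the word "cat" are assumptions of mine for illustration.

    # Sketch only: sample an image on a coarse grid and emit the search word
    # in each cell, coloured with that cell's pixel value, as simple HTML.
    from PIL import Image  # Pillow is assumed to be installed

    def word_picture(path, word, cols=60):
        img = Image.open(path).convert("RGB")
        rows = max(1, (img.height * cols) // max(1, img.width))
        small = img.resize((cols, rows))      # one pixel per text cell
        lines = []
        for y in range(rows):
            spans = []
            for x in range(cols):
                r, g, b = small.getpixel((x, y))
                spans.append(f'<span style="color:rgb({r},{g},{b})">{word}</span>')
            lines.append("".join(spans))
        return ('<pre style="font-size:6px;line-height:6px;">'
                + "\n".join(lines) + "</pre>")

    if __name__ == "__main__":
        with open("word_picture.html", "w") as f:
            f.write(word_picture("photo.jpg", "cat"))  # placeholder inputs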

Both are neat art toys by C6.org and Gu Jian that nicely play on Google. I’m not clear who C6 is – an art collective in the UK? – but they have a number of clean and irreverent projects. Thanks to Robert for pointing this out to me.

DARPA Global Autonomous Language Exploitation

DARPA seeks strong, responsive proposals from well-qualified sources for a new research and development program called GALE (Global Autonomous Language Exploitation) with the goal of eliminating the need for linguists and analysts and automatically providing relevant, distilled actionable information to military command and personnel in a timely fashion.

Global Autonomous Language Exploitation (GALE) is an unbelievably ambitious DARPA project from the same office that brought us the ARPANET (the Information Processing Technology Office). Imagine if they succeed. Thanks to Greg Crane for pointing this out.

Update – the DARPA Information Processing Technology Office page on GALE is here. Under the GALE Proposer Pamphlet (BAA 05-28) there is a description of the types of discourse that should be processed and the desired results.

Engines must be able to process naturally-occurring speech and text of all the following types:

  • Broadcast news (radio, television)
  • Talk shows (studio, call-in)
  • Newswire
  • Newsgroups
  • Weblogs
  • Telephone conversations

. . .

DARPA’s desired end results include:

  • A transcription engine that produces English transcripts with 95% accuracy
  • A translation engine producing English text with 95% accuracy
  • A distillation engine able to fill knowledge bases with key facts and to deliver useful information as proficiently as humans can.

Face of Text: Streaming Video and Podcast

We have added a section on Media to The Face of Text web site – from there you can launch a QuickTime application that lets you see streaming video of selected talks at the conference with synchronized slides and text. The application was developed with LiveStage Pro – an interesting authoring environment for QuickTime applications. You can also hear podcasts/MP3 audio of selected talks.

The streaming media was developed by Zack Melnick as a Multimedia senior thesis project. Drew Paulin has been updating the web site.