The last week and a bit I have been in Kyoto to give a talk at a conference on the “Possibilities in Digital Humanities” which was organized by Professor Kozaburo Hachimura and sponsored by the Information Processing Society of Japan and by the Ritsumeikan University Digital Humanities Center for Japanese Arts and Culture.
While the talks were in Japanese I was able to follow most of the sessions with the help of Mitsuyuki Inaba and Keiko Suzuki. I was impressed by the quality of the research and the involvement of new scholars. There seemed to be a much higher participation of postdoctoral fellows and graduate students than at similar conferences in Canada, which bodes well for digital humanities in Japan.
A number of the talks I went to dealt with Intangible Cultural Heritage (ICH), which is UNESCO’s term for living heritage that doesn’t have a fixed form, like oral traditions, performing arts, rituals, and traditional crafts. (See previous post.) ICH is important to Japan and Kyoto with traditions like the tea ceremony, Noh theatre, and Bunraku puppetry. Because it is “intangible,” ICH is difficult to represent digitally, which is why it is the subject of innovative research in Japan. For example, at the Hachimura Laboratory they are using body motion capture to recreate Noh performances and other forms of dance.
Listening to presentations and talking with Japanese researchers it became clear to me how text-centric we are in the West. Projects in Japan that are creating databases of manuscripts have to deal with problems of calligraphy where particular characters are rendered differently by the artist. In many “texts” of historical interest the design of the page and the interaction of the characters with other design elements carries meaning. Creating a digital representation is not as simple as transcribing the characters and adding markup (when was it ever that simple?). This means that Japanese researchers are still struggling to develop standard ways of encoding texts. By this I don’t mean standards in the sense of the TEI, but standards in the deeper sense of expectations as to what is represented and how to connect images to transcriptions.
I can’t help wondering if in the West printing had the effect of separating book/page design from authorial intention so that it was much easier for us when we started “pouring old wine into new bottles”. Print technology normalized the character set we deal with so that authors don’t usually play with meaningful variant glyphs, and the print economy divided responsibility for the text (which is the author’s) from responsibility for the design of the book (which is the publisher’s and the book designer’s). This made the transition to electronic text much easier since we already had a tradition of treating only the sequence of characters from the author as important for the text. We were used to design (font and page) changing from edition to edition and had developed a print culture around this. Authors and readers don’t expect design to do more than be transparent, with certain exceptions like artists’ books.
After the conference I attended a session where postdoctoral fellows at the Digital Humanities Center presented their research (in English.) Shinya Saito presented on “Development of Schematic Expressions and Knowledge Management” and the Kachina Cube project he is working on with M. Inaba. The Kachina Cube is an interesting 3-dimensional way of visualizing information. Maps (real or conceptual) provide the base X and Z dimensions and time the Y axis. You then see items in space floating over the map.
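As I understood it, the mapping from an item to a point in the cube is simple: the item’s map position gives X and Z, and its date gives the height Y. Here is a minimal sketch of that idea in Python; the names, fields, and example data are my own, not the Kachina Cube project’s.

```python
from dataclasses import dataclass

# My own toy data model for the Kachina Cube idea: each item has a
# position on a base map (x, z) and a year that becomes its height (y),
# so items "float" above the map at their point in time.

@dataclass
class Item:
    label: str
    x: float   # east-west position on the base map
    z: float   # north-south position on the base map
    year: int  # when the item occurred

def to_cube_coords(items, base_year):
    """Map each item into (x, y, z), with y = years since base_year."""
    return [(it.label, (it.x, it.year - base_year, it.z)) for it in items]

events = [
    Item("temple founded", 2.0, 3.5, 1600),
    Item("festival recorded", 2.1, 3.4, 1750),
]
print(to_cube_coords(events, base_year=1600))
```

The point of the third dimension is that items sharing a map location but separated in time stack vertically, so you can see a place’s history at a glance.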
Takaaki Okamoto presented on “Characters and Image in Japanese Historical Documents – Image Database based on Text-Image Linking”, which dealt with the problems of simply transcribing historical documents. He showed examples of handwritten characters that vary from the norm. He is developing an image annotation system for adding notes to page images. He showed a neat system for searching the transcribed text and seeing the page images with the normalized characters superimposed.
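The text-image linking behind a system like this can be sketched very simply. The following is a hypothetical data model of my own (not Okamoto’s actual system): each transcribed character records the page image and the region where its handwritten form appears, plus the normalized character used for searching.

```python
from dataclasses import dataclass

@dataclass
class CharLink:
    normalized: str  # standard form of the character, used for search
    page_image: str  # filename of the scanned page
    bbox: tuple      # (left, top, width, height) of the region on the image

# Example links: the same normalized character written in two hands
# on two different pages (illustrative data only).
links = [
    CharLink("月", "page_012.jpg", (120, 340, 28, 30)),
    CharLink("月", "page_047.jpg", (88, 210, 26, 29)),
]

def search(links, char):
    """Return every page region where a normalized character appears."""
    return [(l.page_image, l.bbox) for l in links if l.normalized == char]

print(search(links, "月"))
```

Searching on the normalized form while keeping the bounding box is what lets you query a clean transcription yet still display the variant glyph as the scribe actually wrote it.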
Ryoko Matsuba presented on “Kabuki and Image Database”. She is working on a database of woodblock prints and studying the way they present dramatic moments in Kabuki theatre. She is asking questions about how images of dramatic moments might influence performance.
Kiyofumi Kusui presented on the “Construction of a Database of Japanese Literary Magazines Published in Japan-ruled Korea.” His database covers literary magazines from Korea in the 1920s and 30s that were in Japanese, in order to study the broader literary history of Japan in the 20th century. The database will allow him to see how Korean culture influenced Japanese literature.