I just came across an interesting colloquium paper by Michael Sperberg-McQueen, The State of Computing in the Humanities: Making a Synthesizer Sound like an Oboe. It has a good section on “Document Geometries” where he describes the different ways we represent texts on a computer, from linear representations to typed hierarchies (like the TEI).
The entire TEI Guidelines can be summed up in one phrase, which we can imagine directed at producers of commercial text processing software: “Text is not simple.”
The TEI attempts to make text complex — or, more positively, the TEI enables the electronic representation of text to capture more complexity than is otherwise possible.
The TEI makes a few claims of a less vague nature, too.
- Many levels of text, many types of analysis or interpretation, may coexist in scholarship, and thus must be able to coexist in markup.
- Text varies with its type or genre; for major types the TEI provides distinct base tag sets.
- Text varies with the reader, the use to which it is put, the application software which must process it; the TEI provides a variety of additional tag sets for these.
- Text is linear, but not completely.
- Text is not always in English. It is appalling how many software developers forget this.
None of these claims will surprise any humanist, but some of them may come as a shock to many software developers.
This paper also got me thinking about the obviousness of structure. Sperberg-McQueen criticizes the “tagged linear” geometry (as in COCOA-tagged text) thus:
The linear model captures the basic linearity of text; the tagged linear model adds the ability to model, within limits, some non-linear aspects of the text. But it misses another critical characteristic of text. Text has structure, and textual structures can contain other textual structures, which can contain still other structures within themselves. Since as readers we use textual structures to organize text and reduce its apparent complexity, it is a real loss if software is incapable of recognizing structural elements like chapters, sections, and paragraphs and insists instead on presenting text as an undifferentiated mass.
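To make the contrast concrete, here is a rough sketch (the tag names and text are my own invented illustration, not drawn from the paper). In a COCOA-style tagged linear format, a tag simply sets a reference variable that applies to everything that follows until the next tag of the same kind; nothing encloses anything:

```
<A Shakespeare>
<T Hamlet>
<S Hamlet>
To be, or not to be, that is the question:
Whether 'tis nobler in the mind to suffer
```

A TEI-style encoding of the same passage instead nests structures inside structures, so software can recognize that a line sits inside a speech, inside a scene, inside an act:

```xml
<text>
  <body>
    <div type="act" n="3">
      <div type="scene" n="1">
        <sp who="#hamlet">
          <l>To be, or not to be, that is the question:</l>
          <l>Whether 'tis nobler in the mind to suffer</l>
        </sp>
      </div>
    </div>
  </body>
</text>
```

The second form is what Sperberg-McQueen means by text having containable, nestable structure: the hierarchy itself is part of the data, not something a program has to reconstruct from a flat stream of tags.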
I can’t help asking if text really does have structure or if it is in the eye of the reader. Or perhaps, to be more accurate, whether text has structure in the way we mean when we tag text using XML. If I were to naively talk about text structure I would actually be more likely to think of material things like pages, covers, tabs (in files), and so on. I might think of things that visually stand out like sidebars, paragraphs, indentations, coloured text, headings, or page numbers. None of these are really what gets encoded in “structural markup.” Rather, what gets encoded is a logic, or a structure in the structuralist sense of some underlying “real” structure.
Nonetheless, I think Sperberg-McQueen is onto something about how readers use textual structures, and about the need, therefore, to give them similar affordances. I would rephrase the issue as a matter of giving readers affordances with which to manage the complexity and sheer quantity of text. A book gives you things like a Table of Contents and an Index. An electronic text (or electronic book) doesn’t have to give you exactly the same affordances, but we do need some ways of managing the excess complexity of text. In fact, we should be experimenting with what the computer can do well rather than reimplementing what paper does well. You can’t flip pages on the computer or find a coffee stain near something important, but you can scroll or search for a pattern. The TEI and logical encoding are about introducing computationally useful structure, not reproducing print structures. That’s why pages are so awkward in the TEI.
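A sketch of why pages sit awkwardly in the TEI’s logical hierarchy (the text here is invented for illustration): a page boundary rarely coincides with a paragraph or chapter boundary, so the TEI can’t encode the page as a container. Instead it uses an empty milestone element, `<pb/>`, which merely marks the point where a new page begins, cutting across the nesting rather than enclosing anything:

```xml
<div type="chapter" n="2">
  <p>The paragraph begins near the bottom of one page,
  <pb n="34"/>
  and carries on at the top of the next, so the page break
  falls inside the paragraph rather than around it.</p>
</div>
```

The physical structure (pages) and the logical structure (chapters, paragraphs) are two overlapping hierarchies, and an XML document can only properly nest one of them; the other is reduced to markers.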
Update: The original link to the paper doesn’t work now, try this SSOAR Link – they have a PDF. (Thanks to Michael for pointing out the link rot.)