Archive for the ‘Markup and Text Representation’ Category

Social Digital Scholarly Editing

Tuesday, July 16th, 2013

On July 11th and 12th I was at a conference in Saskatoon on Social Digital Scholarly Editing. This conference was organized by Peter Robinson and colleagues at the University of Saskatchewan. I kept conference notes here.

I gave a paper on “Social Texts and Social Tools.” My paper argued for text analysis tools as a “reader” of editions. I took the extreme case of big data text mining and asked what scraping/mining tools want, and don’t want, in a text. I took this extreme view to challenge the scholarly editing view that the more interpretation you put into an edition the better. Big data wants to automate the process of gathering and mining texts – it wants “clean” texts that don’t have markup, annotations, metadata, and other interventions that can’t be easily removed. The variety of markup in digital humanities projects makes them very hard to clean.
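To make the point concrete, here is a minimal sketch (my own illustration, not something from the paper) of the kind of brute-force cleaning a text-mining pipeline tends to apply to a TEI/XML edition: everything the editor encoded is flattened into a plain stream of words.

```python
import re
from xml.etree import ElementTree as ET

def strip_to_plain_text(tei_path):
    """Crude cleaning of the kind big-data pipelines favour:
    throw away all markup and keep only a stream of words."""
    tree = ET.parse(tei_path)
    # itertext() drops every tag and attribute; apparatus, variant
    # readings, and header metadata are either flattened into the word
    # stream or have to be stripped out by hand afterwards.
    text = " ".join(tree.getroot().itertext())
    return re.sub(r"\s+", " ", text).strip()
```

Everything the editor worked to encode (hands, dates, variants, certainty) is exactly what gets discarded.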

The response was appreciative of the provocation, but (thankfully) not convinced that big data was the audience of scholarly editors.

Virtual Research Worlds: New Technology in the Humanities – YouTube

Thursday, February 7th, 2013

The folks at TextGrid have created a neat video about new technology in the humanities, Virtual Research Worlds: New Technology in the Humanities. The video shows the connection between archives and supercomputers (grid computing). At around 2:20 you will see a number of visualizations from Voyant that they have connected into TextGrid. I love the Links tool spawning words before a bronze statue. Who is represented by the statue?


The TEI at Work Animation using gource

Thursday, January 24th, 2013


Martin Holmes uploaded a neat animation, The TEI Council at Work, that visualizes the version-control editing of the TEI documents.

The animation is generated with gource, a software version control visualization tool.
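For anyone who wants to try the same thing on another project, gource can be pointed at a local checkout of a repository. A rough sketch of driving it from Python (the repository path, title, and option values are placeholders, not the ones Holmes used):

```python
import subprocess

# Run gource interactively on a local clone of a repository.
# --seconds-per-day compresses the commit history; --title labels the window.
subprocess.run(
    ["gource", "--seconds-per-day", "1", "--title", "TEI editing history"],
    cwd="/path/to/tei-repository",  # placeholder path
    check=True,
)
# To save a video rather than watch live, gource's frame output can be
# piped into an encoder such as ffmpeg.
```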

Tasman: Literary Data Processing

Wednesday, January 2nd, 2013

I came across a 1957 article by an IBM scientist, P. Tasman, on the methods used in Roberto Busa’s Index Thomisticus project, with the title Literary Data Processing (IBM Journal of Research and Development, 1(3): 249-256). The article, which is in the third issue of the IBM Journal of Research and Development, has an illustration of how they used punch cards for this project.

Image of Punch Card

While the reproduction is poor, you can read the fields encoded on the card for each word (sketched as a record after the list):

  • Location in text
  • Special reference mark
  • Word
  • Number of word in text
  • First letter of preceding word
  • First letter of following word
  • Form card number
  • Entry card number
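Read as a data structure, each card is essentially a fixed record per word. A hypothetical sketch of that record in modern terms (the field names are mine, not Tasman’s):

```python
from dataclasses import dataclass

@dataclass
class WordCard:
    """One punch card per word, following the fields listed above."""
    location_in_text: str               # where the word occurs in the text
    special_reference_mark: str
    word: str
    word_number_in_text: int
    first_letter_of_preceding_word: str
    first_letter_of_following_word: str
    form_card_number: int
    entry_card_number: int
```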

At the end Tasman speculates on how these methods developed on the project could be used in other areas:

Apart from literary analysis, it appears that other areas of documentation such as legal, chemical, medical, scientific, and engineering information are now susceptible to the methods evolved. It is evident, of course, that the transcription of the documents in these other fields necessitates special sets of ground rules and codes in order to provide for information retrieval, and the results will depend entirely upon the degree and refinement of coding and the variety of cross referencing desired.

The indexing and coding techniques developed by this method offer a comparatively fast method of literature searching, and it appears that the machine-searching application may initiate a new era of language engineering. It should certainly lead to improved and more sophisticated techniques for use in libraries, chemical documentation, and abstract preparation, as well as in literary analysis.

Busa’s project may have been more than just the first humanities computing project. It seems to be one of the first projects to use computers in handling textual information and a project that showed the possibilities for searching any sort of literature. I should note that in the issue after the one in which Tasman’s article appears there is an article by H. P. Luhn (developer of the KWIC index) on A Statistical Approach to Mechanized Encoding and Searching of Literary Information (IBM Journal of Research and Development 1(4): 309-317). Luhn specifically mentions the Tasman article and the concording methods developed as being useful to the larger statistical text mining that he proposes. For IBM researchers, Busa’s project was an important first experiment in handling unstructured text.

I learned about the Tasman article in a journal paper deposited by Thomas Nelson Winter, “Roberto Busa, S.J., and the Invention of the Machine-Generated Concordance.” The paper gives an excellent account of Busa’s project and its significance to concording. Well worth the read!

Juxta Commons

Thursday, December 6th, 2012


From Humanist I just learned about Juxta Commons. This is a web version of the earlier downloadable Java tool. The new version still has the lovely interface that shows the differences between variants. The commons, however, builds on the desktop tool by being a place where collations can be kept. Others can find and explore your collations, and you can search the commons for collation projects.
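Juxta’s own collation engine is more sophisticated, but a minimal sketch of the kind of variant detection such a tool performs, using Python’s difflib (my illustration, not Juxta’s code), gives the idea:

```python
from difflib import SequenceMatcher

witness_a = "It was a dark and stormy night".split()
witness_b = "It was a dark and rainy night".split()

# Align the two witnesses word by word and report where they diverge.
matcher = SequenceMatcher(a=witness_a, b=witness_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, witness_a[i1:i2], "->", witness_b[j1:j2])
# prints: replace ['stormy'] -> ['rainy']
```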

Another interesting feature is that Google ads appear if you search the commons. The search is “powered by Google,” so perhaps the ads come with the service.

Pundit: A novel semantic web annotation tool

Thursday, July 12th, 2012

Susan pointed me to Pundit: A novel semantic web annotation tool. Pundit (which has a great domain name) is an annotation tool that lets people create and share annotations on web materials. The annotations are triples that can be saved and linked into DBpedia and so on. I’m not sure I understand how it works entirely, but the demo is impressive. It could be the killer app of semantic web technologies for the digital humanities.
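I have not dug into Pundit’s internals, but the general shape of an annotation-as-triple is easy to sketch with rdflib (the URIs and predicate below are made up for illustration, not Pundit’s actual vocabulary):

```python
from rdflib import Graph, Namespace, URIRef

g = Graph()
DBPEDIA = Namespace("http://dbpedia.org/resource/")

# A hypothetical annotation: a passage on some web page "cites" a DBpedia resource.
passage = URIRef("http://example.org/page#passage-12")  # made-up annotation target
cites = URIRef("http://example.org/vocab/cites")        # made-up predicate
g.add((passage, cites, DBPEDIA["War_and_Peace"]))

print(g.serialize(format="turtle"))
```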

War and Peace gets Nookd

Saturday, June 2nd, 2012

From Slashdot I found this blog entry, Ocracoke Island Journal: Nookd, about how a Nook version of War and Peace had the word “kindle” replaced by “nook,” as in “It was as if a light has been Nooked (kindled) in a carved and painted lantern…” It seems that the company that ported the Kindle version over to the Nook ran a search and replace on the word Kindle and replaced it with Nook.
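A minimal illustration (mine, not the porter’s actual script) of how a blanket search-and-replace produces exactly this kind of error:

```python
text = "It seemed as if a light had been kindled in a carved and painted lantern"

# A naive global replacement aimed at the brand name "Kindle" also hits
# the ordinary verb "kindled", turning it into "Nookd".
ported = text.replace("kindle", "Nook").replace("Kindle", "Nook")
print(ported)
# -> It seemed as if a light had been Nookd in a carved and painted lantern
```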

I think this should be turned into a game. We should create an e-reader that plays with the text in various ways. We could adapt some of Steve Ramsay’s algorithmic ideas (reversing lines of poetry). Readers could score points by clicking on the words they think were replaced and guessing the correct one.

Globalization Compendium Archive

Thursday, April 19th, 2012

I have been working for a while on archiving the Globalization Compendium, a project I worked on. Yesterday I got it archived in two institutional repositories:

In both cases there is a Zip of a BagIt bag with the XML files, code, and other documentation from the site. My first major deposit.
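For anyone curious about the packaging, BagIt bags are straightforward to produce. A sketch using the Library of Congress bagit library for Python (the directory path and metadata are placeholders, not the actual deposit):

```python
import bagit

# Turn an existing directory of files (XML, code, documentation) into a BagIt bag.
# The directory is reorganized in place: the payload moves under data/ and
# checksum manifests are written alongside it.
bag = bagit.make_bag(
    "globalization-compendium",           # placeholder directory
    {"Contact-Name": "Depositor Name"},   # placeholder bag-info metadata
)
bag.validate()  # verify the manifests and checksums before zipping and depositing
```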

New Variorum Shakespeare Digital Challenge

Wednesday, April 11th, 2012

The MLA has issued a New Variorum Shakespeare Digital Challenge. They are looking for original and innovative ways of “representing, and exploring this data.” You can download the XML files and schema from GitHub here to experiment with. Submissions are due by Friday, the 31st of August, 2012. The winner of the challenge will get $500.

Canadian Writing Research Collaboratory Launch

Thursday, September 29th, 2011


I am at the Canadian Writing Research Collaboratory (CWRC) launch. CWRC is building a collaborative editing environment that will allow editorial projects to manage the editing of electronic scholarly editions. Among other things, CWRC is developing an online XML editor, editorial workflow management tools, and an integrated repository.

The keynote speakers for the event include Shawna Lemay and Aritha Van Herk.