Martin Holmes uploaded a neat animated visualization, The TEI Council at Work, that shows the version editing of the TEI documents.
The animation is generated by Gource, a tool for visualizing software version control activity.
I came across a 1957 article by an IBM scientist, P. Tasman, on the methods used in Roberto Busa’s Index Thomisticus project, with the title Literary Data Processing (IBM Journal of Research and Development 1(3): 249-256). The article has an illustration of how they used punch cards for this project.
While the reproduction is poor, you can make out what was encoded on the card for each word:
At the end Tasman speculates on how these methods developed on the project could be used in other areas:
Apart from literary analysis, it appears that other areas of documentation such as legal, chemical, medical, scientific, and engineering information are now susceptible to the methods evolved. It is evident, of course, that the transcription of the documents in these other fields necessitates special sets of ground rules and codes in order to provide for information retrieval, and the results will depend entirely upon the degree and refinement of coding and the variety of cross referencing desired.
The indexing and coding techniques developed by this method offer a comparatively fast method of literature searching, and it appears that the machine-searching application may initiate a new era of language engineering. It should certainly lead to improved and more sophisticated techniques for use in libraries, chemical documentation, and abstract preparation, as well as in literary analysis.
Busa’s project may have been more than just the first humanities computing project. It seems to be one of the first projects to use computers in handling textual information and a project that showed the possibilities for searching any sort of literature. I should note that in the issue after the one in which Tasman’s article appears there is an article by H. P. Luhn (developer of the KWIC index) on A Statistical Approach to Mechanized Encoding and Searching of Literary Information (IBM Journal of Research and Development 1(4): 309-317). Luhn specifically mentions the Tasman article and the concording methods developed as being useful to the larger statistical text mining that he proposes. For IBM researchers, Busa’s project was an important first experiment in handling unstructured text.
I learned about the Tasman article from a journal paper deposited by Thomas Nelson Winter, Roberto Busa, S.J., and the Invention of the Machine-Generated Concordance. The paper gives an excellent account of Busa’s project and its significance to concording. Well worth the read!
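For readers curious what a concordance involves computationally, here is a minimal sketch of a KWIC-style (keyword-in-context) listing: each occurrence of a keyword printed with a window of surrounding context. This is my own toy illustration in Python, not Tasman’s punch-card method or Luhn’s actual algorithm.

```python
def kwic(text, keyword, width=30):
    """Print each occurrence of keyword with `width` characters of context."""
    lowered = text.lower()
    target = keyword.lower()
    start = 0
    while True:
        i = lowered.find(target, start)
        if i == -1:
            break
        left = text[max(0, i - width):i]
        right = text[i + len(target):i + len(target) + width]
        print(f"{left:>{width}} [{text[i:i + len(target)]}] {right}")
        start = i + len(target)

# A short Latin sample, in the spirit of the Index Thomisticus.
sample = ("In principio erat Verbum, et Verbum erat apud Deum, "
          "et Deus erat Verbum.")
kwic(sample, "Verbum")
```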
From Humanist I just learned about Juxta Commons. This is a web version of the earlier downloadable Java tool. The new version still has the lovely interface that shows the differences between variants. The commons, however, builds on the personal computer tool by being a place where collations can be kept. Others can find and explore your collations, and you can search the commons to find collation projects.
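To give a rough idea of what a collation display computes under the hood, here is a small sketch using Python’s standard difflib to align two variant readings and mark where they differ. The variants are made up, and Juxta’s actual collation algorithm is more sophisticated than this.

```python
import difflib

# Two made-up variant readings, tokenized into words.
variant_a = "It was the best of times, it was the worst of times".split()
variant_b = "It was the best of days, it was the worst of days".split()

# Align the two token sequences and report each matching or differing run.
matcher = difflib.SequenceMatcher(None, variant_a, variant_b)
for op, a1, a2, b1, b2 in matcher.get_opcodes():
    if op == "equal":
        print(" ".join(variant_a[a1:a2]))
    else:
        print(f"{op}: {' '.join(variant_a[a1:a2])!r} -> {' '.join(variant_b[b1:b2])!r}")
```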
Another interesting feature is that Google ads appear if you search the commons. The search is “powered by Google”, so perhaps the ads come with the service.
Susan pointed me to Pundit: A novel semantic web annotation tool. Pundit (which has a great domain name, “thepund.it”) is an annotation tool that lets people create and share annotations on web materials. The annotations are triples that can be saved and linked into DBpedia and so on. I’m not sure I entirely understand how it works, but the demo is impressive. It could be the killer app of semantic web technologies for the digital humanities.
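As I understand it, an annotation in this style boils down to an RDF triple connecting a web resource to something like a DBpedia entry. Here is a sketch using the Python rdflib library; the target page and the choice of predicate are my own assumptions for illustration, not Pundit’s actual vocabulary.

```python
from rdflib import Graph, URIRef, Namespace

DBPEDIA = Namespace("http://dbpedia.org/resource/")
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
# Hypothetical annotated page; a real annotation would target an actual URL.
page = URIRef("http://example.org/some-annotated-page")
# Assert that the page is about a DBpedia resource.
g.add((page, DCTERMS.subject, DBPEDIA["Dante_Alighieri"]))

print(g.serialize(format="turtle"))
```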
From Slashdot I found this blog entry, Ocracoke Island Journal: Nookd, about how a Nook version of War and Peace had the word “kindle” replaced by “nook”, as in “It was as if a light has been Nookd (kindled) in a carved and painted lantern…” It seems that the company that ported the Kindle version over to the Nook ran a search and replace on the word Kindle and replaced it with Nook.
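My guess at how this happens is a blind string replacement that ignores word boundaries, something like this Python sketch (my own reconstruction, not the porter’s actual code):

```python
# A blind replacement ignores word boundaries, so "kindled" becomes "Nookd".
text = "It was as if a light has been kindled in a carved and painted lantern..."
print(text.replace("kindle", "Nook").replace("Kindle", "Nook"))
# It was as if a light has been Nookd in a carved and painted lantern...
```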
I think this should be turned into a game. We should create an e-reader that plays with the text in various ways. We could adapt some of Steve Ramsay’s algorithmic ideas (reversing lines of poetry). Readers could score points by clicking on the words they think were replaced and guessing the correct one.
I have been working for a while on archiving the Globalization Compendium, a project I worked on. Yesterday I got it archived in two institutional repositories:
In both cases the deposit is a ZIP of a BagIt bag with the XML files, code, and other documentation from the site. My first major deposit.
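For anyone curious about the mechanics, the Library of Congress bagit module for Python can produce this kind of package: it turns a directory into a BagIt bag, which can then be zipped for deposit. The paths and metadata below are placeholders, not the actual Compendium deposit.

```python
import shutil
import bagit

# Turn a directory of XML, code, and documentation into a BagIt bag
# (the directory name and metadata here are hypothetical).
bag = bagit.make_bag(
    "globalization-compendium",
    {"Contact-Name": "Your Name"},  # written into bag-info.txt
)
bag.validate()  # verify checksums before depositing

# Zip the bag for upload to an institutional repository.
shutil.make_archive("globalization-compendium-bag", "zip", "globalization-compendium")
```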
The MLA has issued a New Variorum Shakespeare Digital Challenge. They are looking for original and innovative ways of “representing, and exploring this data.” You can download the XML files and schema from GitHub here to experiment with. Submissions are due by Friday, the 31st of August, 2012. The winner of the challenge will get $500.
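If you want a quick first look at the data before designing anything, a few lines of Python will tally the element names in one of the XML files to show what the markup offers. The filename below is a placeholder for whichever file you download from the repository.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Parse one of the downloaded XML files (hypothetical filename).
tree = ET.parse("comedy_of_errors.xml")

# Count how often each element appears, to get a feel for the markup.
tags = Counter(el.tag for el in tree.iter())
for tag, count in tags.most_common(10):
    print(f"{count:6d}  {tag}")
```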
I am at the Canadian Writing Research Collaboratory (CWRC) launch. CWRC is building a collaborative editing environment that will allow editorial projects to manage the editing of electronic scholarly editions. Among other things, CWRC is developing an online XML editor, editorial workflow management tools, and an integrated repository.
The keynote speakers for the event include Shawna Lemay and Aritha Van Herk.
Dan Cohen has written a good summary of the latest fuss over electronic books, The Fight Over the Future of Digital Books. He explains the latest suit by the Authors Guild against the HathiTrust. This suit is the companion to the Authors Guild’s suit against Google, which has still not been resolved.
Stan Ruecker gave a great talk today about Visualizations in Time for the Humanities Computing Research Colloquium. He is leading a SSHRC-funded project that builds on Drucker and Nowviskie’s work on Temporal Modelling. (I should mention that I am on the project.) Stan started by talking about all the challenges to the linear visualization of time that you see in tools like Simile. They include:
Stan then showed a number of visual designs for these different ways of thinking about time. Some looked like rubber sheets, some like frameworks of cubes with things in them, and some like water droplets. Many of these avoided the “line” in the visualization of time.