50th Anniversary of the Internet

Page from a notebook documenting the connection on October 29, 1969. From UCLA Special Collections.

Fifty years ago, on October 29th, 1969, the first two nodes of the ARPANET are supposed to have connected. There are, of course, all sorts of caveats, but it seems to have been one of the first times someone remotely logged in from one location to another on what became the internet. Gizmodo has an interview with Bradley Fidler on this history that is worth reading.

Remote access was one of the reasons the internet was funded by the US government. They didn’t want to give everyone their own computer. Instead, the internet (ARPANET) would let people use the computers of others remotely (see Hafner & Lyon 1996).

Interestingly, I also just read a story that the internet (or at least North America) has just run out of IP addresses. The pool of IPv4 addresses has been exhausted, and not everyone has switched to IPv6, which has many more addresses available. I blame the Internet of Things (IoT) for assigning addresses to every “smart” object.
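The scale of the difference is easy to check with a quick back-of-the-envelope calculation in Python (IPv4 addresses are 32 bits, IPv6 addresses are 128 bits):

    # IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
    ipv4_space = 2 ** 32    # about 4.3 billion addresses
    ipv6_space = 2 ** 128   # about 3.4 x 10^38 addresses

    print(f"IPv4 address space: {ipv4_space:,}")
    print(f"IPv6 address space: {ipv6_space:,}")
    print(f"IPv6 has {ipv6_space // ipv4_space:,} times as many addresses")

Even with an address for every “smart” object on the planet, IPv6 would not run out any time soon.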

Hafner, K., & Lyon, M. (1996). Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon & Schuster.

HyperCard at the Internet Archive

Screenshot of the Internet Archive HyperCard collection

The Internet Archive is now collecting HyperCard stacks and has an emulator so they can be run in the browser. If you have old stacks to contribute, you can upload them to hypercardonline.tk (which has a nerdy, HyperCard-like interface).

Like many, I learned to program multimedia in HyperCard. I even ended up teaching it to faculty and teachers at the University of Toronto. It was a great starting development environment with its mix of graphical tools, hypertext tools, and a verbose programming language. Its only (and major) flaw was that it wasn’t designed to create networked information. HyperCard stacks had to be passed around on disks. The web made possible a networked hypertext environment that solved the distribution problems of the 1980s. One wonders why Apple (or someone else) doesn’t bring HyperCard back in an updated and networked form. I guess that is what the Internet Archive is doing.

For more on the history of HyperCard, see Matthew Lasar’s Ars Technica article, “30-plus years of HyperCard, the missing link to the Web.”

What is cool is that artists are using HyperCard to make art, like Formality* discussed in the previous post.

Rights Statements

At the SpokenWeb symposium at SFU I learned about the web site RightsStatements.org. This site provides standardized rights statements to use and put on the web, for example, “In Copyright – Rights-Holder(s) Unlocatable or Unidentifiable.” These statements often use American rather than Canadian legal language, but they are a useful resource.

Another, better-known source of rights statements is Creative Commons, but it is aimed more at creators than at cultural heritage collections online.

JSTOR Text Analyzer

JSTOR, and some other publishers of electronic research, have started building text analysis tools into their publishing platforms. I came across this at the end of a JSTOR article, where there was a link to “Get more results on Text Analyzer” that leads to a beta of the JSTOR Labs Text Analyzer environment.

JSTOR Labs Text Analyzer

This environment provides simple analytical tools for surveying a journal issue or an article. The emphasis is on extracting keywords and entities so that one can figure out whether an article or journal is useful, and one can use these to find other, similar things.

Results of Text Analyzer

What intrigues me is this embedding of tools into reading environments, which is different from the standard model of keeping data and tools separate. I wonder how we could instrument Voyant so that it could be more easily embedded in other environments.
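One simple approach (a sketch, not a description of Voyant’s official embedding support) would be to generate iframe snippets that point at a single Voyant tool for a given corpus. The URL pattern below follows Voyant’s tool/corpus scheme, but the corpus ID is a made-up placeholder:

    # A sketch of embedding a single Voyant tool in another reading
    # environment via an iframe. The corpus ID used below is a
    # hypothetical placeholder, not a real corpus.
    def voyant_embed(tool: str, corpus_id: str,
                     width: int = 600, height: int = 400) -> str:
        url = f"https://voyant-tools.org/tool/{tool}/?corpus={corpus_id}"
        return (f'<iframe src="{url}" width="{width}" height="{height}" '
                f'style="border: none;"></iframe>')

    # e.g. embed a word cloud (Cirrus) for a corpus
    print(voyant_embed("Cirrus", "example-corpus-id"))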

Peter Robinson, “Textual Communities: A Platform for Collaborative Scholarship on Manuscript Heritages”

Peter Robinson gave a talk on “Textual Communities: A Platform for Collaborative Scholarship on Manuscript Heritages” as part of the Faculty of Arts’ Singhmar Guest Speaker Program.

He started by talking about whether textual traditions had any relationship to the material world. How do texts relate to each other?

Today, stemmata are visualizations that serve as models, going beyond the manuscripts themselves to propose evolutionary hypotheses in visual form.

He then showed what he is doing with the Canterbury Tales Project and talked about the challenges of adapting the time-consuming transcription process to other manuscripts. There are lots of different transcription systems, but few that handle collation. There is also the problem of costs and of involving a distributed network of people.

He then defined text:

A text is an act of (human) communication that is inscribed in a document.

I wondered how he would deal with Allen Renear’s argument that there are Real Abstract Objects which, like Platonic Forms, are real but have no material instance. When we talk, for example, of “Hamlet,” we aren’t talking about a particular instance but about an abstract object. Likewise with things like “justice,” “history,” and “love.” Peter responded that the work doesn’t exist except in its instances.

He also mentioned that this is why stand-off markup doesn’t work: texts aren’t a set of linear objects. It is better to represent a text as a tree of leaves.

So, he launched Textual Communities – https://textualcommunities.org/

This is a distributed editing system that also supports collation.
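To give a sense of what collation involves, here is a toy sketch using Python’s standard difflib to align two invented witnesses of the same line and report where their readings diverge. This is purely illustrative, not how Textual Communities implements collation:

    # Toy collation: align two witnesses word by word and report variants.
    # The witness readings are invented for illustration.
    from difflib import SequenceMatcher

    witness_a = "Whan that Aprille with his shoures soote".split()
    witness_b = "Whan that Aprill with hise shoures sote".split()

    matcher = SequenceMatcher(None, witness_a, witness_b)
    for tag, a1, a2, b1, b2 in matcher.get_opcodes():
        if tag != "equal":
            print(f"variant: {' '.join(witness_a[a1:a2])!r} "
                  f"vs {' '.join(witness_b[b1:b2])!r}")

Real collation across dozens of witnesses is much harder than this pairwise alignment, which is part of Robinson’s point about why so few systems handle it.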

Burrows and Antonia Archives: Centre for 21st Century Humanities

What happens to old digital humanities projects? Most vanish without a trace. Some get archived, like the work of John Burrows and others at the Centre for Literary and Linguistic Computing (CLLC). Dr. Alexis Antonia kept an archive of CLLC materials, which is now available from the Centre for 21st Century Humanities.