On Making in the Digital Humanities

On Making in the Digital Humanities fills a gap in our understanding of digital humanities projects and craft by exploring the processes of making as much as the products that arise from it. The volume draws focus to the interwoven layers of human and technological textures that constitute digital humanities scholarship.

On Making in the Digital Humanities is finally out from UCL Press. The book honours the work of John Bradley and those in the digital humanities who share their scholarship through projects. Stéfan Sinclair and I first started work on it years ago and were soon joined by Julianne Nyhan and later Alexandra Ortolja-Baird. It is a pleasure to see it finished.

I co-wrote the Introduction with Nyhan and wrote a final chapter on “If Voyant then Spyral: Remembering Stéfan Sinclair: A discourse on practice in the digital humanities.” Stéfan passed away while we were editing the volume.

Fuck the Poetry Police: On the Index of Major Literary Prizes in the United States

The LARB has a nice essay by Dan Sinykin, Fuck the Poetry Police: On the Index of Major Literary Prizes in the United States, on how researchers have used data to track the unequal distribution of poetry prizes. The essay discusses the creation of the Post45 Data Collective, which provides peer review for post-1945 cultural datasets.

Sinykin calls this an “act as groundbreaking as the research itself,” which seems a bit of an exaggeration. It is important that data is being reviewed and published, but that has been happening for a while in other fields. Nonetheless, this is a welcome initiative, especially if it gets attention like the LARB article. In 2013 the Tri-Council (of research agencies in Canada) called for a culture of research data stewardship. In 2015 I worked with Sonja Sapach and Catherine Middleton on a report on a Data Management Plan Recommendation for Social Science and Humanities Funding Agencies. That report looks more at the front end, recommending that people whose grant proposals ask for funding for data-driven projects be required to submit data management plans, but the goal was the same: making data available for future research.

Sinykin’s essay looks at the poetry publishing culture in the US and how white it is. He shows how data can be used to study inequalities. We also need to ask about the privilege of English-language poetry and of culture from the Global North more generally, not to mention of research and research infrastructure.

Unitron Mac 512: A Contraband Mac 512K from Brazil

From a paper on postcolonial computing I learned about the Unitron Mac 512: A Contraband Mac 512K from Brazil. For a while Brazil didn’t allow the importation of computers (so as to kickstart its own computer industry). Unitron decided to reverse engineer the Mac 512K, but Apple put pressure on Brazil and the project was shut down. At least 500 machines were built, and I guess some are still in circulation.

The article is Philip, K., et al. (2010). “Postcolonial Computing: A Tactical Survey.” Science, Technology, & Human Values 37(1).

Though Apple had no intellectual property protection for the Macintosh in Brazil, the American corporation was able to pressure government and other economic actors within Brazil to reframe Unitron’s activities, once seen as nationalist and anti-colonial, as immoral piracy.

Social Sciences & Humanities Open Marketplace

Discover new resources for your research in Social Sciences and Humanities: tools, services, training materials and datasets, contextualised.

I’ve been experimenting with the Social Sciences & Humanities Open Marketplace. The Marketplace was developed by three European research infrastructures: DARIAH-EU, CLARIN, and CESSDA. I’m proud to say that TAPoR contributed data to the Marketplace. It is great to have such a directory service for finding things!

Replication, Repetition, or Revivification

A short essay I wrote with Stéfan Sinclair, “Recapitulation, Replication, Reanalysis, Repetition, or Revivification,” is now up in preprint form. The essay is part of a longer work on “Anatomy of tools: A closer look at ‘textual DH’ methodologies,” a set of interventions looking at text tools. These came out of an ADHO SIG-DLS (Digital Literary Studies) workshop that took place in Utrecht in July 2019.

Our intervention at the workshop had the original title “Zombies as Tools: Revivification in Computer Assisted Interpretation” and concentrated on practices of exploring old tools – a sort of revivification or bringing back to life of zombie tools.

The full paper should be published soon by DHQ.

Digital humanities – How data analysis can enrich the liberal arts

But despite data science’s exciting possibilities, plenty of other academics object to it

The Economist has a nice Christmas Special on the Digital humanities – How data analysis can enrich the liberal arts. The article tells a bit of our history (starting with Busa, of course) and gives examples of new work like that of Ted Underwood. They note criticisms that DH may be sucking up all the money or corrupting the humanities, but they also point out how little DH gets from the NEH pot (some $60m out of $16bn), which is hardly evidence of a takeover. The truth is, as they note, that the humanities are under attack again and the digital humanities don’t make much of a difference either way. The neighboring fields that I see students moving to are media arts, communication studies, and specializations like criminology. Those are the threats, but also sanctuaries, for the humanities.

SHAPE

SHAPE is a new collective name for those subjects that help us understand ourselves, others and the human world around us. They provide us with the methods and forms of expression we need to build better, deeper, more colourful and more valuable lives for all.

From an Australian speaker at the INKE conference I learned about SHAPE, or Social Sciences, Humanities & The Arts For People & The Economy. This is an initiative of the London School of Economics, the British Academy and Arts Council England. It tries to complement the attention given to the STEM fields. I like how they play on the word shape in their various assets, as in:

The shape of then.

The shape of now.

The shape of if.

The shape of when.

You can read more in this Guardian story on University and Arts Council in drive to re-brand ‘soft’ academic subjects.

A Digital Project Handbook

A peer-reviewed, open resource filling the gap between platform-specific tutorials and disciplinary discourse in digital humanities.

From a list I am on I learned about Visualizing Objects, Places, and Spaces: A Digital Project Handbook. This is a highly modular textbook that covers a lot of the basics of project management in the digital humanities. They have a call out now for “case studies (research projects) and assignments that showcase archival, spatial, narrative, dimensional, and/or temporal approaches to digital pedagogy and scholarship.” The handbook is edited by Beth Fischer (Postdoctoral Fellow in Digital Humanities at the Williams College Museum of Art) and Hannah Jacobs (Digital Humanities Specialist, Wired! Lab, Duke University), but parts are authored by all sorts of people.

What I like about it is the way they have split up the modules and organized things by the type of project. They also post deadlines, which seem to mark new iterations of the materials and the completion of different parts. This could prove to be a great resource for teaching project management.

Guido Milanese: Filologia, letteratura, computer

Cover of the book "Filologia, Letteratura, Computer"
Philology, Literature, Computer: Ideas and instruments for humanistic informatics

A broad and thorough manual that covers, between theory and practice, the subject of humanities computing for university teaching and learning.

The publisher (Vita e Pensiero) kindly sent me a copy of Guido Milanese’s Filologia, letteratura, computer (Philology, Literature, Computer), an introduction to thinking about and thinking through the computer and texts. The book is designed to work as a textbook that introduces students to the ideas and to key technologies, and then provides short guides to further ideas and readings.

The book focuses, as the title suggests, almost exclusively on digital philology, or the computational study of texts. At the end Milanese has a short section on other media, but he has chosen, rightly I think, to focus on one set of technologies in depth rather than attempt a broad overview. In this he draws on an Italian tradition that goes back to Father Busa, but more importantly includes Tito Orlandi (who wrote the preface) and Numerico, Fiormonte, and Tomasi’s L’umanista digitale (translated into English as The Digital Humanist).

Milanese starts with the principle from Giambattista Vico that knowledge is made (verum ipsum factum). Milanese believes that “reflection on the foundations identifies instruments and operations, and working with instruments and methods leads to redefining the reflection on foundations” (p. 9, my rather free translation). This is the virtuous circle in the digital humanities of theorizing and praxis, where either one alone would be barren. Thus the book is not simply a list of tools and techniques one should know, but a series of reflections on humanistic knowledge and how it can be implemented in tools/techniques which in turn may challenge our ideas. This is what Stéfan Sinclair and I have been calling “thinking-through,” where thinking through technology is a way of learning both about the thinking and about the technology.

An interesting example of this move from theory to praxis is in chapter 7 on “The Markup of Text” (“La codifica del testo”). He moves from a discussion of adding metadata to the datafied raw text to Minsky’s idea of frames of knowledge as a way of understanding XML. I had never thought of Minsky’s ideas about artificial intelligence as contributing to the thinking behind XML, and perhaps Milanese is the first to make the connection, but it sort of works. The idea, as I understand it, goes something like this: human knowing, which Minsky wants to model for AI, brings frames of knowledge to any situation. If you enter a room that looks like a kitchen, you have a frame of knowledge about how kitchens work that lets you infer things like “there must be a fridge somewhere which will have a snack for me.” Frames are Minsky’s way of trying to overcome the poverty of AI models based on collections of logical statements. They are a way of thinking about, and actually representing, the contextual or common-sense knowledge we bring to any situation, such that we know a lot more than what is strictly in sight.
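To make the frame idea concrete, here is a minimal sketch in Python (my illustration, not code from the book): each frame carries default slots and can inherit from a parent frame, so a kitchen “knows” things it never states directly. The frame names and slots are invented for the example.

    # A toy frame system: each frame has a parent and slots of default knowledge.
    frames = {
        "room":    {"parent": None,   "slots": {"has_walls": True}},
        "kitchen": {"parent": "room", "slots": {"has_fridge": True}},
    }

    def lookup(name, slot):
        """Walk up the parent chain until some frame supplies the slot."""
        frame = frames.get(name)
        while frame is not None:
            if slot in frame["slots"]:
                return frame["slots"][slot]
            frame = frames.get(frame["parent"])
        return None

    print(lookup("kitchen", "has_fridge"))  # True - the kitchen frame itself
    print(lookup("kitchen", "has_walls"))   # True - inherited from the room frame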

Frame systems are made up of frames and connections to other frames. The room frame connects hierarchically to the kitchen-as-a-type-of-room frame, which connects to the fridge frame, which then connects to the snack frame. The idea is to find a way to represent frames of knowledge and their connections such that they can be used by AI systems. This is where Milanese slides over to XML as a hierarchical way of adding metadata to a text that enriches it with a frame of knowledge. I assume the frame (or Platonic form?) would be the DTD or schema, which then lets you do some limited forms of reasoning about an instance of an encoded text. The markup explicitly tells the computer something about the parts of the text, for example that <author>Guido Milanese</author> identifies the author.

The interesting thing is to reflect on this application of Minsky’s theory. To begin, I wonder if it is historically true that the designers of XML (or its parent SGML) were thinking of Minsky’s frames. I doubt it, as SGML is descended from GML, which predates Minsky’s 1974 memo on “A Framework for Representing Knowledge.” That said, what I think Milanese is doing is using Minsky’s frames as a way of explaining what we do when modelling a phenomenon like a text (and our knowledge of it). Modelling is making explicit a particular frame of knowledge about a text: I know that certain blocks are paragraphs, so I tag them as such. I also model in the sense of creating a paradigmatic version of my perspective on the text. This would be the DTD or schema, which defines the parts and their potential relationships. Validating a marked-up text is then a way of testing the instance against the model.
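As a hedged illustration of that last point, here is how testing an instance against its model might look in Python using the lxml library; the tiny DTD and document are invented for the example.

    # Validate an encoded text (the instance) against a DTD (the model).
    from io import StringIO
    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT text (author, paragraph+)>
    <!ELEMENT author (#PCDATA)>
    <!ELEMENT paragraph (#PCDATA)>
    """))

    doc = etree.fromstring(
        "<text><author>Guido Milanese</author>"
        "<paragraph>Verum ipsum factum.</paragraph></text>")

    print(dtd.validate(doc))  # True: the instance fits the frame
    # A <text> missing its <author> would fail validation against this model.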

This nicely connects back to Vico’s knowing is making. We make digital knowledge not by objectively representing the world in digital form, but by creating frames or models for what can be digitally known and then applying those frames to instances. It is a bit like object-oriented programming: you create classes that frame what can be represented about a type of object.
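A quick sketch of that analogy (again mine, with invented names): the class is the frame, and any instance must fit it.

    # The class frames what can be represented about a type of object.
    from dataclasses import dataclass, field

    @dataclass
    class EncodedText:
        author: str                      # the frame requires an author
        paragraphs: list = field(default_factory=list)

    doc = EncodedText(author="Guido Milanese",
                      paragraphs=["Verum ipsum factum."])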

There is an attractive correspondence between the idea of knowledge as a hierarchy of frames and an XML representation of a text as a hierarchy of elements. There is a limit, however, to the move. Minsky was developing a theory of knowing such that knowledge could be artificially represented on a computer that could then do knowing (in the sense of completing AI tasks like image recognition). Markup and marking up strike me as more limited activities of structuring. A paragraph tag doesn’t actually convey to the computer all that we know about paragraphs; it is just a label in a hierarchy of labels to which styles and processes can be attached. Perhaps the human modeller is thinking about texts in all their complexity, but they have to learn not to confuse what they know with what they can model for the computer. Perhaps a human reader of the XML can bring frames of knowledge to reconstitute some of what the tagger meant, but the computer can’t.

Another way of thinking about this would be Searle’s Chinese room thought experiment. The XML is the bits of paper handed under the door in Chinese for the interpreter in the room. An appropriate use of XML will provoke the right operations to get something out (like a legible text on the screen) but won’t mean anything to the computer. Tagging a string with <paragraph> doesn’t make it a real paragraph in the fullness of what is known about paragraphs. It makes it a string of characters with associated metadata that may or may not be used by the computer.

Perhaps these limitations of computing are exactly what Milanese wants us to think about in modelling. Frames in the sense of picture frames are a device for limiting the view. For Minsky you can have many frames with which to make sense of any phenomenon; each one is a different perspective that bears knowledge, sometimes contradictory. When modelling a text for the computer you have to decide what you want to represent and how to do it so that users can see the text through your frame. You aren’t helping the computer understand the text so much as representing your interpretation for other humans to use and, if they read the XML, reinterpret. This is making a knowing.

References

Milanese, G. (2020). Filologia, Letteratura, Computer: Idee e strumenti per l’informatica umanistica. Milan: Vita e Pensiero.

Minsky, M. (1974, June). “A Framework for Representing Knowledge.” MIT-AI Laboratory Memo 306. MIT.

Searle, J. R. (1980). “Minds, Brains and Programs.” Behavioral and Brain Sciences 3(3): 417-457.