Guido Milanese: Filologia, letteratura, computer

Cover of the book "Filologia, Letteratura, Computer"
Philology, Literature, Computer: Ideas and instruments for humanistic informatics

A broad and thorough manual that covers, between theory and practice, the field of humanistic informatics for university teaching and learning.

The publisher (Vita e Pensiero) kindly sent me a copy of Guido Milanese’s Filologia, letteratura, computer (Philology, Literature, Computer), an introduction to thinking about and thinking through the computer and texts. The book is designed to work as a textbook that introduces students to the ideas and to key technologies, and then provides short guides to further ideas and readings.

The book focuses, as the title suggests, almost exclusively on digital philology, or the computational study of texts. At the end Milanese has a short section on other media, but he has chosen, rightly I think, to focus on one set of technologies in depth rather than attempt a broad overview. In this he draws on an Italian tradition that goes back to Father Busa, but more importantly includes Tito Orlandi (who wrote the preface) and Numerico, Fiormonte, and Tomasi’s L’umanista digitale (this has been translated into English; see The digital humanist).

Milanese starts with the principle from Giambattista Vico that knowledge is made (verum ipsum factum). Milanese believes that “reflection on the foundations identifies instruments and operations, and working with instruments and methods leads to redefining the reflection on foundations” (p. 9, my rather free translation). This is the virtuous circle in the digital humanities of theorizing and praxis, where either one alone would be barren. Thus the book is not simply a list of tools and techniques one should know, but a series of reflections on humanistic knowledge and how it can be implemented in tools/techniques which in turn may challenge our ideas. This is what Stéfan Sinclair and I have been calling “thinking-through,” where thinking through technology is a way of learning both about the thinking and about the technology.

An interesting example of this move from theory to praxis is in chapter 7 on “The Markup of Text” (“La codifica del testo”). He moves from a discussion of adding metadata to the datafied raw text to Minsky’s idea of frames of knowledge as a way of understanding XML. I had never thought of Minsky’s ideas about artificial intelligence contributing to the thinking behind XML, and perhaps Milanese is the first to do so, but it sort of works. The idea, as I understand it, goes something like this: human knowing, which Minsky wants to model for AI, brings frames of knowledge to any situation. If you enter a room that looks like a kitchen you have a frame of knowledge about how kitchens work that lets you infer things like “there must be a fridge somewhere which will have a snack for me.” Frames are Minsky’s way of trying to overcome the poverty of AI models based on collections of logical statements. They are a way of thinking about, and actually representing, the contextual or common sense knowledge that we bring to any situation, such that we know a lot more than what is strictly in sight.

Frame systems are made up of frames and connections to other frames. The room frame connects hierarchically to the kitchen-as-a-type-of-room frame, which connects to the fridge frame, which then connects to the snack frame. The idea then is to find a way to represent frames of knowledge and their connections such that they can be used by AI systems. This is where Milanese slides over to XML as a hierarchical way of adding metadata to a text that enriches it with a frame of knowledge. I assume the frame (or Platonic form?) would be the DTD or Schema, which then lets you do some limited forms of reasoning about an instance of an encoded text. The markup explicitly tells the computer something about the parts of the text; for example, <author>Guido Milanese</author> tells it who the author is.
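The kitchen example can be sketched as a toy frame system in a few lines of Python. This is my own illustrative sketch, not anything from Minsky or Milanese: frames hold slots of default knowledge and link hierarchically, so that knowing a room is a kitchen lets you infer a fridge.

```python
# A toy frame system: each frame has "slots" (default knowledge)
# and an "is-a" link to a more general frame it inherits from.
frames = {
    "room":    {"is-a": None,   "slots": {"has-walls": True}},
    "kitchen": {"is-a": "room", "slots": {"has-fridge": True}},
    "fridge":  {"is-a": None,   "slots": {"may-contain": "snack"}},
}

def lookup(frame, slot):
    """Walk up the is-a hierarchy until the slot is found."""
    while frame is not None:
        if slot in frames[frame]["slots"]:
            return frames[frame]["slots"][slot]
        frame = frames[frame]["is-a"]
    return None

# Entering a kitchen, we infer a fridge and inherit room knowledge.
print(lookup("kitchen", "has-fridge"))  # True
print(lookup("kitchen", "has-walls"))   # True (inherited from "room")
```

The inheritance in `lookup` is what does the work: the contextual knowledge is not stated in the kitchen frame itself but recovered from the frames it connects to.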

The interesting thing is to reflect on this application of Minsky’s theory. To begin, I wonder if it is historically true that the designers of XML (or its parent SGML) were thinking of Minsky’s frames. I doubt it, as SGML is descended from GML, which predates Minsky’s 1974 memo on “A Framework for Representing Knowledge.” That said, what I think Milanese is doing is using Minsky’s frames as a way of explaining what we do when modelling a phenomenon like a text (and our knowledge of it). Modelling is making explicit a particular frame of knowledge about a text. I know that certain blocks are paragraphs, so I tag them as such. I also model in the sense of creating a paradigmatic version of my perspective on the text. This would be the DTD or Schema which defines the parts and their potential relationships. Validating a marked-up text would be a way of testing the instance against the model.
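Testing the instance against the model can be made concrete with a hand-rolled sketch: a dictionary of allowed child elements stands in for a DTD. The element names and rules here are invented for illustration, not taken from any real schema:

```python
import xml.etree.ElementTree as ET

# A toy "DTD": for each element, the set of children it may contain.
model = {
    "text":      {"author", "paragraph"},
    "author":    set(),
    "paragraph": set(),
}

def validate(element):
    """Check every element's children against the model, recursively."""
    allowed = model.get(element.tag)
    if allowed is None:
        return False  # element not defined in the model
    return all(child.tag in allowed and validate(child)
               for child in element)

doc = ET.fromstring(
    "<text><author>Guido Milanese</author>"
    "<paragraph>Verum ipsum factum.</paragraph></text>")
print(validate(doc))  # True

bad = ET.fromstring("<text><fridge/></text>")
print(validate(bad))  # False: the model has no frame for a fridge in a text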

This nicely connects back to Vico’s knowing is making. We make digital knowledge not by objectively representing the world in digital form, but by creating frames or models for what can be digitally known and then applying those frames to instances. It is a bit like object-oriented programming: you create classes that frame what can be represented about a type of object.
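The object-oriented analogy can be sketched directly. In this minimal example (the fields are my own invention), the class is the frame: it fixes in advance which aspects of a text can be represented at all, and anything outside the frame is simply lost.

```python
from dataclasses import dataclass

# The class is the model (the "frame"): it decides in advance
# which aspects of a text can be represented.
@dataclass
class Text:
    title: str
    author: str
    paragraphs: list

# An instance is one text seen through that frame; typography,
# marginalia, and everything the frame lacks a field for is lost.
t = Text(title="Filologia, letteratura, computer",
         author="Guido Milanese",
         paragraphs=["Verum ipsum factum."])
print(t.author)  # Guido Milanese
```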

There is an attractive correspondence between the idea of knowledge as a hierarchy of frames and an XML representation of a text as a hierarchy of elements. There is a limit, however, to the move. Minsky was developing a theory of knowing such that knowledge could be artificially represented on a computer that could then do knowing (in the sense of completing AI tasks like image recognition). Markup and marking up strike me as more limited activities of structuring. A paragraph tag doesn’t actually convey to the computer all that we know about paragraphs. It is just a label in a hierarchy of labels to which styles and processes can be attached. Perhaps the human modeller is thinking about texts in all their complexity, but they have to learn not to confuse what they know with what they can model for the computer. Perhaps a human reader of the XML can bring the frames of knowledge to reconstitute some of what the tagger meant, but the computer can’t.

Another way of thinking about this would be Searle’s Chinese room thought experiment. The XML is the bits of paper handed under the door in Chinese to the interpreter in the room. An appropriate use of XML will provoke the right operations to get something out (like a legible text on the screen) but won’t mean anything. Tagging a string with <paragraph> doesn’t make it a real paragraph in the fullness of what is known about paragraphs. It makes it a string of characters with associated metadata that may or may not be used by the computer.

Perhaps these limitations of computing are exactly what Milanese wants us to think about in modelling. Frames, in the sense of picture frames, are a device for limiting the view. For Minsky you can have many frames with which to make sense of any phenomenon, each one a different perspective that bears knowledge, sometimes contradictory knowledge. When modelling a text for the computer you have to decide what you want to represent and how to do it so that users can see the text through your frame. You aren’t helping the computer understand the text so much as representing your interpretation for other humans to use and, if they read the XML, reinterpret. This is making a knowing.

References

Milanese, G. (2020). Filologia, Letteratura, Computer: Idee e strumenti per l’informatica umanistica. Milan: Vita e Pensiero.

Minsky, M. (1974, June). A Framework for Representing Knowledge. MIT-AI Laboratory Memo 306. MIT.

Searle, J. R. (1980). “Minds, Brains and Programs.” Behavioral and Brain Sciences, 3(3), 417-457.

Conference: Artificial Intelligence for Information Accessibility

AI for Society and the Kule Institute for Advanced Research helped organize a conference on Artificial Intelligence for Information Accessibility (AI4IA) on September 28th, 2020. This conference was organized on the International Day for Universal Access to Information, which is why the focus was on how AI can be important to access to information. An important partner in the conference was the UNESCO Information For All Programme (IFAP) Working Group on Information Accessibility (WGIA).

The International Day for Universal Access to Information focused on the right to information in times of crisis and on the advantages of having constitutional, statutory and/or policy guarantees for public access to information to save lives, build trust and help the formulation of sustainable policies through and beyond the COVID-19 crisis. Speakers talked about how vital access to accurate information is in these pandemic times and the role artificial intelligence could play as we prepare for future crises. Tied to this was a discussion of the important role for international policy initiatives and shared regulation in ensuring that smaller countries, especially in the Global South, benefit from developments in AI. The worry is that some countries won’t have the digital literacy or cadre of experts to critically guide the introduction of AI.

The AI4S Associate Director, Geoffrey Rockwell, kept conference notes on the talks: Conference Notes on AI4IA 2020.

Ryan Cordell: Programmable Type: the Craft of Printing, the Craft of Code

A line of R code set in movable type

I want to situate the kinds of programming typically practiced in digital humanities research and teaching in relation to practices more familiar to book historians and bibliographers, such as the work of compositors and printers working with moveable type.

Ryan Cordell sent me a link to a talk on Programmable Type: the Craft of Printing, the Craft of Code. The talk looks at the “modes of thought and labor” of composing movable type and of programming. He is careful to warn us about the simplistic story that casts movable type and the computer as two information technologies that caused revolutions in how we think about knowledge. What is particularly interesting is how he weaves hands-on work into his course Technologies of Text. He asks students not just to read about printing, but to try doing it. Likewise for programming in R. There is a knowing that comes from doing something and attending to the labor of that doing. Replicating the making of texts gives students (and researchers) a sense of the materiality and contexts of media. It is a way of doing media archaeology.

In the essay, Cordell writes about the example of the visual poem “A Dude” and its many iterations composed with different type. I had blogged about “A Dude,” but hadn’t thought about how the poem would have been a way for the compositor to show off their craft, much like a twitterbot might be a way for a programmer to show off theirs.

Cordell frames this discussion by considering the controversy around whether digital humanists should need to be able to code. He raises an interesting challenge: whether learning the craft of programming (or letterpress printing) might make it harder to view the craft critically. In committing time and labour to learning a craft, does one get implicated or corrupted by it? Doesn’t one end up valuing the craft simply because it is something one can now do, so that to critique it would be to critique oneself?


Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian about Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share and then ditch the drivers they have employed for self-driving cars as those become reliable. In other words, they are first disrupting the human taxi services so as to capitalize on driverless technology when it comes. Their current business is losing money as they feast on venture capital to gain market share, and if they can’t make the switch to driverless they will likely go bankrupt.

This raises the question of whether we will see driverless technology good enough to oust human drivers. I suspect that we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so as to be safe for driverless vehicles. In countries with chaotic and hard-to-map streets (think medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular and AI in general are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, driving out the small human taxi businesses, all in order to monopolize a market that they can then profit on by firing the drivers that got them there for driverless cars. Why is this the way AI is being commercialized rather than trying to create better public transit systems or better systems for helping people with disabilities? Who do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those commercializing and their financial backers (which are often all of us and our pension plans) will also feel no moral responsibility because we are just benefiting from (mostly) legal innovative businesses. Corporate social responsibility is a myth. At most corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is just fair game and the inevitable disruptions in the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening the impact on the rest of us? Governments have planning units like city planning units, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing, a citizen’s revolt. Look at our cities and their sprawl: despite all sorts of research and a history of sprawl, there is still very little discipline or planning to constrain the developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and yes, even disruption, are blessed with innocence as if, like children, they just do their thing and can’t be expected to anticipate the consequences or have to pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?

Automatic grading and how to game it

Edgenuity involves short answers graded by an algorithm, and students have already cracked it

The Verge has a story on how students are figuring out how to game automatic marking systems like Edgenuity. The story is titled, These students figured out their tests were graded by AI — and the easy way to cheat. The story describes a keyword salad approach where you just enter a list of words that the grader may be looking for. The grader doesn’t know whether what you wrote is legible or nonsense; it just looks for the right words. The students in turn get good at skimming the study materials for the keywords needed (or find lists shared by other students online).
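A grader of the kind described, one that only checks for the presence of expected words, can be sketched in a few lines. This is a hypothetical reconstruction; Edgenuity’s actual algorithm is not public:

```python
import re

def keyword_grade(answer, keywords):
    """Score an answer by the fraction of expected keywords present,
    ignoring whether the answer is coherent prose or word salad."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return len(keywords & words) / len(keywords)

keywords = {"mitochondria", "energy", "cell", "respiration"}

essay = "The mitochondria produce energy for the cell through respiration."
salad = "cell energy respiration mitochondria"

print(keyword_grade(essay, keywords))  # 1.0
print(keyword_grade(salad, keywords))  # 1.0 -- the salad scores the same
```

The point of the sketch is the second call: a bag of keywords is indistinguishable from an essay to a grader that never looks at syntax or sense.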

Perhaps we could build a tool called Edgenorance to which you could feed the study materials and which would generate the keyword list automatically. It could watch the lectures for you, do the speech recognition, and then extract the relevant keywords based on the text of the question.
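The core of such an imagined tool could be crude keyword extraction: count the content words in the study materials and keep the most frequent. The tool and its pipeline are hypothetical, and real speech recognition is omitted here:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "that",
             "for", "from"}

def extract_keywords(study_material, n=5):
    """Return the n most frequent content words -- a crude stand-in for
    guessing which words an automatic grader might be looking for."""
    words = re.findall(r"[a-z]+", study_material.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

lecture = ("The cell uses mitochondria for respiration. "
           "Respiration in the cell releases energy. "
           "Energy from respiration powers the cell.")
print(extract_keywords(lecture, 3))  # e.g. 'cell', 'respiration', 'energy'
```

Feed the output straight into the keyword-salad strategy and the circle closes: an algorithm gaming an algorithm, with no understanding anywhere in the loop.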

None of this should be surprising. Companies have been promoting algorithms that were probably word-based for a while. The algorithm works only as long as it is not understood and thus not gamed. Perhaps we will get AIs that can genuinely understand and assess a short paragraph answer, but that will be close to an artificial general intelligence, and such an AGI will change everything.

What coding really teaches children

You’ve seen movies where programmers pound out torrents of code? That is nothing like reality. Most of the time, coders don’t type at all; they sit and stare morosely at the screen, running their hands through their hair, trying to spot what they’ve done wrong. It can take hours, days, or even weeks. But once the bug is fixed and the program starts working again, the burst of pleasure has a narcotic effect.

Stéfan pointed me to a nice opinion piece about programming education in the Globe titled, Opinion: What coding really teaches children. Clive Thompson argues that teaching programming in elementary school will not necessarily teach math, but it can teach kids about the digital world and teach them the persistence it takes to get complex things working. He also worries, as I do, about asking elementary teachers to learn enough coding to be able to teach it. This could be a recipe for alienating a lot of students who are taught by teachers who haven’t really learned it themselves.

CSDH / SCHN 2020 was brilliant online

Today was the last day of the CSDH / SCHN 2020 online conference. You can see my conference notes here. The conference had to go online due to Covid-19 and the cancellation of Congress 2020. That said, the online conference worked brilliantly. The Programme Committee, chaired by Kim Martin, deserves a lot of credit, as do the folks at the U of Alberta Arts Resource Centre who provided technical support. Some of the things they did that worked well:

  • The schedule had a single track across 5 days rather than parallel tracks over 3 days. See the schedule.
  • There were only 3 and a half hours of sessions a day (from 9:00am to 12:30 Western time) so you could get other things done. (There were also hangout sessions before and after.)
  • Papers (or prepared presentations) had to be put up the week before on Humanities Commons.
  • The live presentations during the conference were thus kept to about 3 minutes, which kept sessions short enough to make the single track possible.
  • They had a chair and a respondent for each session, which meant that there was a lot of discussion instead of long papers with no time for questions. In fact, the discussion seemed better than at on-site conferences.
  • They used Eventbrite for registration, Zoom for the registrants-only parts of the conference, and Google Meet for the open parts.
  • They had hangout or informal sessions at the beginning and end of each day where more informal discussion could take place.

The nice thing about the conference was that they took advantage of the medium. As none of us had flown to London, Ontario, they were able to stretch the conference over 5 days, but not use up the entire day.

All told, I think they have shown that an online conference can work surprisingly well if properly planned and supported.

DARIAH Virtual Exchange Event

This morning at 7am I was up participating in a DARIAH VX (Virtual Exchange) on the subject of The Scholarly Primitives of Scholarly Meetings. This virtual seminar was set up when DARIAH’s f2f (face-to-face) meeting was postponed. The VX was, to my mind, a great example of an intentionally designed virtual event. Jennifer Edmond and colleagues put together an event meant to be both about and an example of a virtual seminar.

One feature they used was to have us all split into smaller breakout rooms. I was in one on The Academic Footprint: Sustainable methods for knowledge exchange. I presented on Academic Footprint: Moving Ideas Not People which discussed our experience with the Around the World Econferences. I shared some of the advice from the Quick Guide I wrote on Organizing a Conference Online.

  • Recognize the status conferred by travel
  • Be explicit about blocking out the time to concentrate on the econference
  • Develop alternatives to informal networking
  • Gather locally or regionally
  • Don’t mimic F2F conferences (change the pace, timing, and presentation format)
  • Be intentional about objectives of conference – don’t try to do everything
  • Budget for management and technology support

For those interested, we have a book coming out from Open Book Publishers with the title Right Research that collects essays on sustainable research. We have put up preprints of two of the essays that deal with econferences.

The organizers had the following concept and questions for our breakout group.

Session Concept: Academic travel is an expense not only to the institutions and grant budgets, but also to the environment. There have been moves towards open-access, virtual conferences and near carbon-neutral events. How can academics work towards creating a more sustainable environment for research activities?

Questions: (1) How can academics work towards creating a more sustainable environment for research activities? (2) What are the barriers or limitations to publishing in open-access journals and how can we overcome these? (3) What environmental waste does your research produce? Hundreds of pages of printed drafts? Jet fuel pollution from frequent travel? Electricity from powering huge servers of data?

The breakout discussion went very well. In fact, I would have had more breakout discussion and less introduction, though the introduction was good too.

Another neat feature was a short introduction (with a Prezi available) followed by an interview in front of us all. The interview format gave a liveliness to the proceedings.

Lastly, I was impressed by the supporting materials they had to allow the discussion to continue. This included the DARIAH Virtual Exchange Event – Exhibition Space for the Scholarly Primitives of Scholarly Meetings.

All told, Dr. Edmond and her DARIAH colleagues have put together a great exemplar both about and of a virtual seminar. Stay tuned for when they share more.

The reason Zoom calls drain your energy

Video chat is helping us stay employed and connected. But what makes it so tiring – and how can we reduce ‘Zoom fatigue’?

Many of us have suspected that videoconferencing is stressful. I tend to blame the stress on poor audio, as my hearing isn’t what it used to be. Here’s a story from the BBC on The reason Zoom calls drain your energy. There are a number of factors:

  • The newness of this way of interacting
  • The heightened focus needed to deal with missing non-verbal cues
  • The heightened focus needed to deal with poor audio
  • The need to moderate larger groups so people don’t try to talk at the same time
  • Audio delays that change responsiveness
  • Stress and time lost around technical problems
  • Silences that don’t work the way they do in f2f; they can indicate a malfunction
  • Being on camera and having to be performative
  • The lack of separation between home and work
  • The lack of transition time between meetings (no time to even get up and meet your next appointment at the door)

I hadn’t thought of the role of silence in regular conversations and how we can’t depend on that rhetorically any longer. No dramatic silences any more.

How to Look and Sound Fabulous on a Webcam – School of Journalism – Ryerson University

Now that all of us are having to teach and meet over videoconferencing on our laptops, it is useful to get advice from the professionals. Chelsea sent me this link to Ryerson professor Gary Gould’s advince on How to Look and Sound Fabulous on a Webcam. The page covers practical things like lighting, positioning of the camera, backgrounds, framing and audio. I realize I need to rethink just having the laptop on my lap.