Facebook and American Democracy

The Atlantic has a good article on What Facebook Did to American Democracy by Alexis C. Madrigal (Oct. 12, 2017). What is interesting is how we assumed that the net and social media could affect things and that they would naturally benefit the left.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

The story of networked politics seems to be one of tactics developed by the left and promoted as democratizing and inclusive that are then reused by the right. It is also the story of how Facebook decided to own news, and fake news. They created filter bubbles so that none of us knew what the others were thinking.

An important article. Read it.

Inauguration of a new degree at Bologna

From Humanist I found out that the University of Bologna, one of the oldest universities in the world, is inaugurating a degree in Digital Humanities and Digital Knowledge (DHDK). The official web site for the two-year MA is here. The programme will be in English and the Programme Director is Dr. Francesca Tomasi.

I love the idea of an inauguration of a programme. From the programme of the inauguration, it looks like a nice set of events/talks too.

Congratulations to Bologna!

Skinner on his Teaching Machine and programmed learning

From Teaching in a Digital Age by A. W. Bates I came across this 1954 video of Skinner explaining his Teaching Machine, inspired by behaviourism. The machine runs a paper script, but it isn’t that different from the computer-based drill training of today. You get a question, you write your answer, and you get feedback.
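The drill-and-feedback loop Skinner describes is simple enough to sketch in a few lines of Python. This is only an illustration of the pattern; the frames below are invented, not taken from Skinner's machine:

```python
# A minimal sketch of a Skinner-style teaching machine: present a frame,
# collect an answer, and give immediate feedback. The frames are invented.

frames = [
    ("7 x 8 = ?", "56"),
    ("The capital of Italy is?", "Rome"),
]

def drill(frames, answers):
    """Compare each given answer to the expected one and report feedback."""
    feedback = []
    for (prompt, expected), answer in zip(frames, answers):
        correct = answer.strip().lower() == expected.lower()
        feedback.append((prompt, correct))
    return feedback

results = drill(frames, ["56", "rome"])
# Each frame is marked right or wrong immediately, as on Skinner's machine.
```

The point of the sketch is the immediacy: the learner is reinforced frame by frame, which is exactly what behaviourist programmed instruction prescribed.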

Later we got machines that projected slides and hypertext systems. See Programmed Instruction and Teaching Machines.

Hey, Computer Scientists! Stop Hating on the Humanities

Wired Magazine has a nice essay on Hey, Computer Scientists! Stop Hating on the Humanities. The essay by a computer scientist argues that CS students need to study the ethical and social implications of what they build. It can’t be left to others because then it will be too late. Further, CS students should be scared a little:

Professors need to scare their students, to make them feel they’ve been given the skills not just to get rich but to wreck lives; they need to humble them, to make them realize that however good they might be at math, there’s still so much they don’t know.

Conference Report: DH 2017

This year I kept notes about the Digital Humanities 2017 conference at McGill. See DH 2017 Conference Report. My conference report also covers the New Scholars Symposium that took place before the conference.

The NSS is supported by CHCI and centerNet. KIAS provided administrative support and the ACH provided coffee and snacks on the day of. We were lucky to have so many groups supporting the NSS, which in turn supports new scholars in coming to the conference and articulating their issues in an unconference format.

DH 2017 itself was a rich feast of ideas. There was too much going on to summarize in a paragraph, but here are two highlights:

  • We had an opening keynote in French from Marin Dacos. He talked about the “Unexpected Reader” that one gets when publications are open.
  • We had a great closing keynote by Elizabeth Guffey on “The Upside-Down Politics of Access in the Digital Age” that asked about access for disabled people in the digital realm.

The participants of the New Scholars Symposium identified the following as topics to watch and think about:

  • AI and Machine Learning
  • Crowdsourcing
  • Building Twitterbots
  • Training Opportunities
  • Pedagogy
  • Digital Collections and Copyright
  • Diverse Voices

Secretary Clinton’s Email (Source: Wikileaks)

Thanks to Sarah I was led to a nice custom set of visualizations by Salahub and Oldford of Secretary Clinton’s Email (Source: Wikileaks). The visualizations are discussed in a paper titled Interactive Filter and Display of Hillary Clinton’s Emails: A Cautionary Tale of Metadata. Here is how the article concludes:

Finally, this is a cautionary tale. The collection and storage of metadata from any individual in our society should be of concern to all of us. While it is possible to discern patterns from several sources, it is also far too easy to construct a false narrative, particularly one that fits an already held point of view. As analysts, we fall prey to our cognitive biases. Interactive filter and display of metadata from a large corpus of communications add another tool to an already powerful analytic arsenal. As with any other powerful tool it needs to be used with caution.

Their cautionary tale touches on the value of metadata. After the Snowden revelations government officials like Dianne Feinstein have tried to reassure us that mining metadata shouldn’t be a concern because it isn’t content. Research like this shows what can be inferred from metadata.
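As a toy illustration of the point (the names and dates below are made up), even a bare log of senders, recipients, and dates, with no message content at all, exposes who talks to whom most:

```python
from collections import Counter

# Hypothetical email metadata: (sender, recipient, date) only -- no content.
metadata = [
    ("alice", "bob",   "2015-01-03"),
    ("alice", "carol", "2015-01-04"),
    ("bob",   "alice", "2015-01-05"),
    ("alice", "bob",   "2015-02-01"),
]

# Count contacts per unordered pair: frequency alone sketches the social graph.
pair_counts = Counter(frozenset((sender, recipient))
                      for sender, recipient, _ in metadata)
closest_pair, n_messages = pair_counts.most_common(1)[0]
```

Scale this up to years of traffic and you get the kind of network and timeline views the Salahub and Oldford visualizations build, without reading a single message body.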

Calling Bullshit: Syllabus

Each of our lectures will explore one specific facet of bullshit. For each week, a set of required readings are assigned. For some weeks, supplementary readings are also provided for those who wish to delve deeper.

On Twitter I came across this terrific syllabus: Calling Bullshit: Syllabus. The syllabus is learned, full of useful links, clear and funny. I wish I could write a syllabus like this. For example, here are some of the learning objectives:

  • Recognize said bullshit whenever and wherever you encounter it.
  • Figure out for yourself precisely why a particular bit of bullshit is bullshit.

What could be a more important objective in the humanities?


Teaching machines to understand text

Teaching machines to understand – and summarize – text is an article from The Conversation about the use of machine learning in text summarization. The example they give is how machines could summarize software licenses in ways that would make them more meaningful to us. While this seems a potentially useful application, I can’t help wondering why we don’t expect the licensors to summarize their licenses in ways that we can read. Or, barring that, why not make cartoon versions of the agreements, like Terms and Conditions?

The issues raised by the use of computers in summarizing texts are many:

  • What is proposed would only work in a constrained situation like licenses, where the machine can be trained to classify text following some sort of training set. It is unlikely to surprise you with poetry (not that it is meant to).
  • The idea is introduced with the ultimate goal of reducing all the exabytes of data that we have to deal with. This is the “too much information” trope again. The proposed solution doesn’t really deal with the problems that have beguiled us since we started complaining, because part of the problem is too much information of unknown types. That is not to say that machine learning doesn’t have a place, but it won’t solve the underlying problem (again).
  • How would the licensors react if we had tools to digest the text we have to deal with? The licensors will have to think about the legal liability (or advantage) of presenting text we won’t read, but which will be summarized for us. They might choose to be opaque to analytics to force us to read for ourselves.
  • Which raises the question: just what is the problem with too much information? Is it the expectation that we will consume it in some useful way? Is it that we have no time left for just thinking? Is it that we are constantly afraid that someone will have said something important already and we missed it?
  • A wise colleague asked what it would take for something to change us. Are we open to change when we think of too-much-information as something to be handled? Could machine learning become another wall in the interpretative ghetto we build around us?
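To make the first point above concrete, here is a sketch of the simplest sort of extractive summarizer: score each sentence by the frequency of its words in the document and keep the top ones. This is my own toy illustration, not the method the article describes, and the sample text is invented:

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Naive extractive summary: score sentences by the document-wide
    frequency of their words, keep the top n in original order."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Rank sentence indices by total word frequency, highest first.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w] for w in
                                       re.findall(r"\w+", sentences[i].lower())))
    # Restore document order for the sentences we keep.
    return " ".join(sentences[i] for i in sorted(ranked[:n]))

sample = ("The license grants license rights. Cats are nice. "
          "License limits liability.")
```

Notice how dependent this is on the constrained, repetitive vocabulary of a genre like licenses: the word “license” dominates the frequency counts, so license clauses win. On open-ended prose the same trick degrades quickly, which is the constraint the first bullet points to.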

CSDH 2017 conference

Last week I was at the Congress of the Humanities and Social Sciences attending the Canadian Society for Digital Humanities 2017 conference. (See the program here.) It was a brilliant conference organized by the folk at Ryerson. I loved being back in downtown Toronto. The closing keynote by Tracy Fullerton on Finer Fruits: A game as participatory text was fascinating. You can see my conference notes here.

Stéfan Sinclair and I were awarded the Outstanding Achievement Award for our work on Voyant and Hermeneutica. I was also involved in some of the presentations:

  • Todd Suomela presented a paper I contributed to on “GamerGate and Digital Humanities: Applying an Ethics of Care to Internet Research”.
  • I presented a paper with Stéfan Sinclair on “The Beginnings of Content Analysis: From the General Inquirer to Sally Sedelow”.
  • Greg Whistance-Smith presented a demo/poster on “Methodi.ca: A Commons for Text Analysis Methods”.
  • Jinman Zhang presented a demo/poster on our work on “Commenting, Gamification and Analytics in an Online Writing Environment: GWrit (Game of Writing)”.