The Atlantic has a good article, What Facebook Did to American Democracy by Alexis C. Madrigal (Oct. 12, 2017). What is interesting is how we assumed that the net and social media could affect things and that they would naturally benefit the left.
The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.
The story of networked politics seems to be one of tactics developed by the left and promoted as democratizing and inclusive that are then reused by the right. It is also the story of how Facebook decided to own news and fake news. They created filter bubbles so that none of us knew what the others were thinking.
From Teaching in a Digital Age by A. W. Bates I came across this 1954 video of Skinner explaining his Teaching Machine, inspired by behaviourism. The machine runs a paper script, but it isn't that different from the computer-based drill training of today. You get a question, you write your answer, and you get feedback.
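To make the parallel concrete, here is a minimal sketch of the drill loop that both the Teaching Machine and modern drill software share: present a question, take a written answer, give immediate feedback. This is my own illustration of the pattern, not Skinner's mechanism; the questions are invented.

```python
# A toy drill-and-feedback loop in the spirit of Skinner's Teaching Machine.
# The question set is invented for illustration.
DRILL = [
    ("7 x 8 = ?", "56"),
    ("9 x 6 = ?", "54"),
    ("12 x 12 = ?", "144"),
]

def run_drill(items):
    for question, expected in items:
        answer = input(question + " ").strip()
        if answer == expected:
            print("Correct.")  # immediate reinforcement
        else:
            print(f"No, the answer is {expected}.")  # immediate correction

if __name__ == "__main__":
    run_drill(DRILL)
```

The loop is trivial, which is rather the point: the pedagogy of question, answer, and immediate reinforcement has changed far less than the hardware that runs it.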
Wired Magazine has a nice essay, Hey, Computer Scientists! Stop Hating on the Humanities. The essay, by a computer scientist, argues that CS students need to study the ethical and social implications of what they build; it can't be left to others, because then it will be too late. Further, CS students should be scared a little:
Professors need to scare their students, to make them feel they’ve been given the skills not just to get rich but to wreck lives; they need to humble them, to make them realize that however good they might be at math, there’s still so much they don’t know.
This year I kept notes about the Digital Humanities 2017 conference at McGill. See my DH 2017 Conference Report. The report also covers the New Scholars Symposium that took place before the conference.
The NSS is supported by CHCI and centerNet. KIAS provided administrative support, and the ACH provided coffee and snacks on the day of the event. We were lucky to have so many groups supporting the NSS, which in turn supports new scholars in coming to the conference and articulating their issues in an unconference format.
DH 2017 itself was a rich feast of ideas. There was too much going on to summarize in a paragraph, but here are two highlights:
We had an opening keynote in French from Marin Dacos. He talked about the “Unexpected Reader” that one gets when publications are open.
We had a great closing keynote by Elizabeth Guffey on “The Upside-Down Politics of Access in the Digital Age” that asked about access for disabled people in the digital realm.
The participants of the New Scholars Symposium identified a number of topics to watch and think about.
Finally, this is a cautionary tale. The collection and storage of metadata from any individual in our society should be of concern to all of us. While it is possible to discern patterns from several sources, it is also far too easy to construct a false narrative, particularly one that fits an already held point of view. As analysts, we fall prey to our cognitive biases. Interactive filter and display of metadata from a large corpus of communications add another tool to an already powerful analytic arsenal. As with any other powerful tool it needs to be used with caution.
Their cautionary tale touches on the value of metadata. After the Snowden revelations government officials like Dianne Feinstein have tried to reassure us that mining metadata shouldn’t be a concern because it isn’t content. Research like this shows what can be inferred from metadata.
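To see why the reassurance rings hollow, consider a toy sketch of what falls out of nothing but who-contacted-whom records. The data and the method here are invented for illustration; real analyses are far more sophisticated, which is the worry.

```python
# Rank people by how connected they are, using only communication
# metadata (who contacted whom) - no message content at all.
# The records are invented for illustration.
from collections import Counter

calls = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("erin", "alice"), ("erin", "bob"),
]

degree = Counter()
for caller, callee in calls:
    degree[caller] += 1
    degree[callee] += 1

# The hub of the network emerges from the headers alone.
for person, contacts in degree.most_common(3):
    print(person, contacts)
```

Add timestamps, durations, and locations to records like these and the inferences (who is close to whom, who is awake at night, who visited the clinic) get intrusive quickly, all without touching any "content."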
On Twitter I came across this terrific syllabus: Calling Bullshit: Syllabus. The syllabus is learned, full of useful links, clear and funny. I wish I could write a syllabus like this. The authors describe the plan of the course:

Each of our lectures will explore one specific facet of bullshit. For each week, a set of required readings are assigned. For some weeks, supplementary readings are also provided for those who wish to delve deeper.

For example, here are some of the learning objectives:
Recognize said bullshit whenever and wherever you encounter it.
Figure out for yourself precisely why a particular bit of bullshit is bullshit.
What could be a more important objective in the humanities?
Teaching machines to understand – and summarize – text is an article from The Conversation about the use of machine learning in text summarization. The example they give is how machines could summarize software licenses in ways that would make them more meaningful to us. While this seems a potentially useful application, I can't help wondering why we don't expect the licensors to summarize their licenses in ways that we can read. Or, barring that, why not make cartoon versions of the agreements like Terms and Conditions?
The issues raised by the use of computers in summarizing texts are many:
What is proposed would only work in a constrained situation like licenses, where the machine can be trained to classify text using some sort of training set; a minimal sketch of this constrained case follows these points. It is unlikely to surprise you with poetry (not that it is meant to).
The idea is introduced with the ultimate goal of reducing all the exabytes of data that we have to deal with. This is the "too much information" trope again. The proposed solution doesn't really deal with the problems that have beguiled us since we started complaining, because part of the problem is too much information of unknown types. That is not to say that machine learning doesn't have a place, but it won't solve the underlying problem (again).
How would the licensors react if we had tools to digest the text we have to deal with? They would have to think about the legal liability (or advantage) of presenting text we won't read, but which will be summarized for us. They might choose to be opaque to analytics in order to force us to read for ourselves.
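Here is the sketch promised above of the constrained classification case: a classifier trained on labeled license clauses. The clauses, labels, and choice of model are my assumptions for illustration, not the approach described in the article, and it requires scikit-learn.

```python
# A minimal license-clause classifier: TF-IDF features plus logistic
# regression. The training data is invented and far too small for real
# use; it only illustrates the constrained-classification idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_clauses = [
    "The licensor grants you a non-exclusive right to use the software.",
    "You may not reverse engineer, decompile, or disassemble the software.",
    "We may collect and share usage data with third parties.",
    "The software is provided as is, without warranty of any kind.",
]
train_labels = ["grant", "restriction", "data-sharing", "warranty"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_clauses, train_labels)

new_clause = ["Your usage information may be disclosed to our partners."]
print(model.predict(new_clause))  # hopefully ['data-sharing']
```

The constraint is doing all the work here: a fixed set of clause types and a labeled training set. Nothing in this setup generalizes to open-ended texts.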
All of this raises the question of just what the problem with too much information is. Is it the expectation that we will consume it in some useful way? Is it that we have no time left for just thinking? Is it that we are constantly afraid that someone will have said something important already and we missed it?
A wise colleague asked what it would take for something to change us. Are we open to change when we think of too-much-information as something to be handled? Could machine learning become another wall in the interpretative ghetto we build around ourselves?