The Computer Literacy Project, on the other hand, is what a bunch of producers and civil servants at the BBC thought would be the best way to educate the nation about computing. I admit that it is a bit elitist to suggest we should laud this group of people for teaching the masses what they were incapable of seeking out on their own. But I can’t help but think they got it right. Lots of people first learned about computing using a BBC Micro, and many of these people went on to become successful software developers or game designers.
I’ve just discovered Two-Bit History (0b10), a series of long and thorough blog essays on the history of computing by Sinclair Target. One essay is on Codecademy vs. The BBC Micro. The essay gives the background of the BBC Computer Literacy Project, which led the BBC to commission a suitable microcomputer, the BBC Micro. Target uses this history to compare the way the BBC literacy project taught a nation (the UK) computing to the way Codecademy does now. The BBC project comes out better because it doesn’t drop students straight into programming without explanation, something Codecademy does.
I should add that the early 1980s was a period when many constituencies developed their own computer systems, not just the BBC. In Ontario, the Ministry of Education launched a process that led to the ICON, which was used in Ontario schools in the mid-to-late 1980s.
But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.
Thanks to Oliver, I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review: In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been talking about: that we need to move beyond guidelines, lists of principles, and checklists. She nicely describes some of the initiatives to hold AI accountable that are taking place and what should happen next. Read on if you want to see what I think we need.
With the end of the year, some great articles are showing up reflecting on the debacles of the decade. One of my favorites is Audrey Watters’s The 100 Worst Ed-Tech Debacles of the Decade. Ed-tech is one of those fields where, over and over, techies think they know better. Some of the debacles Watters discusses:
The “Flipped Classroom” (Full disclosure: I sat on a committee that funded these.)
This collection of 100 terrible ideas in instructional technology should be mandatory reading for all of us who have been keen on ed-tech. (And I am one who has developed ed-tech and oversold it.) Each item is a mini essay with links worth following.
Article: Applying an Ethics of Care to Internet Research: Gamergate and Digital Humanities
Thanks to Todd Suomela’s lead, we just published an article, Applying an Ethics of Care to Internet Research: Gamergate and Digital Humanities, in Digital Studies. This article is a companion to an article I wrote with Bettina Berendt, Information Wants to Be Free, Or Does It? We and others are exploring the Ethics of Care as a different way of thinking about the ethics of digital humanities research.
Queer places are, by definition, sites of accretion, where stories, memories, and experiences are gathered. Queer place, in particular, is reliant on ephemeral histories, personal moments and memories. GoQueer intends to integrate these personal archives with places for you to discover.
I recently downloaded and started playing the iOS version of GoQueer from the App Store. It is a locative game from my colleague Dr. Maureen Engel.
Engel reflected on this project in a talk on YouTube titled Go Queer: A Ludic, Locative Media Experiment. She nicely theorizes her game not once but in a doubled set of reflections that show how theorizing isn’t a single step in project design but a continuous thinking-through.
For Facebook, Google, and Twitter the fight against fake news seems to be two-pronged: De-incentivize the targeted content and provide avenues to correct factual inaccuracies. These are both surface fixes, however, akin to putting caulk on the Grand Canyon.
And, despite grand hand-waving, both approaches are reactive. They don’t aim at understanding how this problem became prevalent, or at creating a method that attacks the systemic issue. Instead these advertising giants implement new mechanisms by which people can report one-off issues—and by which the platforms will be left playing cat-and-mouse games against fake news—all the while giving no clear glimpse into their opaque ad platforms.
The problem is that these companies make too much money from ads and elections are a chance to get lots of ads, manipulative or not. For that matter, what political ad doesn’t try to manipulate viewers?
The Slashdot story was actually about Mozilla’s Responsible Computer Science Challenge, which will support initiatives to embed ethics in computer science courses. Alas, the efficacy of ethics courses is questionable. Aristotle would say that if you don’t have the disposition to be ethical, no amount of training will do any good; it just helps the unethical pretend to be ethical.
One of the problems with e-conferences is that they are local for everyone, which means that people tune in and out depending on what else they have scheduled rather than devoting the time. When you fly to a conference you can’t be expected to leave it for a meeting, but when a conference is local or online we tend not to pay the attention we would when away.
This has to change if we are to wean ourselves off flying every time we want to pay attention to a conference. We have to learn to be deliberate about allocating time to an e-conference. We have to manifest attention.
Having just finished teaching a course on Big Data and Text Analysis in which I taught students Python, I can appreciate a well-written Python tutorial. Python Programming for the Humanities by Folgert Karsdorp is a great tutorial for humanists new to programming. It takes the form of a series of Jupyter notebooks that students can download, so if they have set up Python on their computers they can work through the tutorials interactively. Karsdorp has done a nice job of weaving in cells where the student has to code, along with quizzes that reinforce the material, which strikes me as an excellent use of the IPython notebook model.
Text Analysis with Topic Models for the Humanities and Social Sciences (TAToM) consists of a series of tutorials covering basic procedures in quantitative text analysis. The tutorials cover the preparation of a text corpus for analysis and the exploration of a collection of texts using topic models and machine learning.
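To give a flavour of the kind of corpus preparation TAToM covers, here is a minimal sketch in plain Python. The toy corpus, stopword list, and function names are mine, purely illustrative, and not taken from the tutorials:

```python
# A minimal, illustrative sketch of corpus preparation for quantitative
# text analysis: tokenize a toy corpus, drop stopwords, count word
# frequencies. Real tutorials like TAToM go much further (lemmatization,
# document-term matrices, topic models).
import re
from collections import Counter

corpus = [
    "The queen ruled the castle and the realm.",
    "The king rode from the castle to the city.",
]
stopwords = {"the", "and", "to", "from", "a"}

def tokenize(text):
    """Lowercase a text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Flatten the corpus into one token stream, filtering out stopwords.
tokens = [w for doc in corpus for w in tokenize(doc) if w not in stopwords]
freqs = Counter(tokens)

# Show the most frequent content words across the corpus.
print(freqs.most_common(3))
```

From counts like these one can build the document-term matrices that topic models and other machine-learning methods take as input.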
Stéfan Sinclair and I (mostly Stéfan) have also produced a textbook for teaching programming to humanists called The Art of Literary Text Analysis. These tutorials are also written as Jupyter notebooks so you can download them and play with them.