Digital humanities – How data analysis can enrich the liberal arts

But despite data science’s exciting possibilities, plenty of other academics object to it

The Economist has a nice Christmas Special on the Digital humanities – How data analysis can enrich the liberal arts. The article tells a bit of our history (starting with Busa, of course) and gives examples of new work like that of Ted Underwood. It notes criticism that DH may be sucking up all the money or corrupting the humanities, but it also points out how little DH gets from the NEH pot (some $60m out of $16bn), which is hardly evidence of a takeover. The truth is, as the article notes, that the humanities are under attack again and the digital humanities don’t make much of a difference either way. The neighboring fields that I see students moving to are media arts, communication studies, and specializations like criminology. Those are the threats, but also sanctuaries for the humanities.

Blogging your research: Tips for getting started

Curious about research blogging, but not sure where to start?

Alice Fleerackers and Lupin Battersby of the ScholCommLab have put together a good post on Blogging your research: Tips for getting started. Despite being committed to blogging (this blog has been going since 2003), I must admit that I’m not sure blogging has the impact it once had. Twitter seems to have replaced blogging as a way to quickly share and follow research. Blog platforms like WordPress have become project news and promotion systems.

What few talk about is how blogging can be a way of journaling for oneself. My blog certainly serves as a form of memory by and for myself. Even if I’m the only one who searches it (which I often do when I’m looking for information about something I knew but forgot), it is still useful. Does everything in academia have to be about promotion and public impact?

In this age of fake news we seem to be back in the situation that Socrates and Gorgias sparred about in Plato’s Gorgias. Gorgias makes the point that the orator or, in today’s terms, the communications specialist, can be more convincing than the scholar because they know how to “communicate”.

Socrates: Then the case is the same in all the other arts for the orator and his rhetoric: there is no need to know [459c] the truth of the actual matters, but one merely needs to have discovered some device of persuasion which will make one appear to those who do not know to know better than those who know.

Gorgias: Well, and is it not a great convenience, Socrates, to make oneself a match for the professionals by learning just this single art and omitting all the others? (Gorgias 459a)

It certainly feels like today there is a positive distrust of expertise such that the blatant lie, if repeated often enough, can convince those who want to hear the lie. Does communicating about our research have the beneficial effect we hope it does? Or, does it inflate our bubble without touching that of others?

Freedom Online Coalition joint statement on artificial intelligence

The Freedom Online Coalition (FOC) has issued a joint statement on artificial intelligence (AI) and human rights.  While the FOC acknowledges that AI systems offer unprecedented opportunities for human development and innovation, the Coalition expresses concern over the documented and ongoing use of AI systems towards repressive and authoritarian purposes, including through facial recognition technology […]

The Freedom Online Coalition is a coalition of countries including Canada that “work closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support Internet freedom – free expression, association, assembly, and privacy online – worldwide.” It was founded in 2011 at the initiative of the Dutch.

The FOC has just released a Joint Statement on Artificial Intelligence and Human Rights that calls for “transparency, traceability and accountability” in the design and deployment of AI systems. It also reaffirms that “states must abide by their obligations under international human rights law to ensure that human rights are fully respected and protected.” The statement ends with a series of recommendations or “Calls to action”.

What is important about this statement is the role it recommends for the state. This is not a set of vapid principles that developers should voluntarily adhere to. It calls for appropriate legislation.

States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination.

I note that yesterday the Liberals introduced a Digital Charter Implementation Act that could significantly change the regulations around data privacy. More on that as I read about it.

Thanks to Florence for pointing this FOC statement out to me.

How’s the Alberta PSE Re-Think Going?

Anyways, in sum: the emerging Alberta 2030 recommendations are for the most part banalities.  Not necessarily bad banalities – there are a lot of worthy ideas in there, just none which suggest any evidence of innovative thinking or actual learning from other jurisdictions.  But there are two obvious flashpoints, neither of which seems very promising ground for the government to launch fights.

Alex Usher has just posted How’s the Alberta PSE Re-Think Going? (Part 2) which, surprise, follows How’s the Alberta PSE Re-Think Going? (Part 1). Part 1 deals with whether the McKinsey review of Post-Secondary Education is worth the $3.7 million the province is paying for it. (It is not!) Part 2 looks at the recommendations.

What Usher doesn’t talk much about is the “Building Skill for Jobs” aspect of the whole exercise. The assumption is that PSE is all about giving students skills so they can get jobs. I also suspect that the skills imagined by the government are mostly those needed by the energy industry, even though those jobs might not exist in the future. As Usher puts it, “most UCP policy is a nostalgia play for the resource boom of 2004-2014”.

The two flashpoints Usher mentions are 1) a recommendation to deregulate tuition, balanced by needs-based financial aid, and 2) a recommendation to have fewer boards. Instead of a board for each institution, there could be just one board for the whole research university sector.

We shall see.

Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian about Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share and then ditch the drivers they have employed for self-driving cars as those become reliable. In other words, they are first disrupting human taxi services so as to capitalize on driverless technology when it comes. Their current business loses money as they feast on venture capital to gain market share, and if they can’t make the switch to driverless, they will likely go bankrupt.

This raises the question of whether we will ever see driverless technology good enough to oust human drivers. I suspect that we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so as to be safe for driverless vehicles. In countries with chaotic and hard-to-map streets (think medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular, and AI in general, are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, and driving out the small human taxi businesses, all in order to monopolize a market that it can then profit from by replacing the drivers who got it there with driverless cars. Why is this the way AI is being commercialized, rather than trying to create better public transit systems or better systems for helping people with disabilities? Who do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those commercializing them and their financial backers (which are often all of us and our pension plans) will also feel no moral responsibility, because we are just benefiting from (mostly) legal innovative businesses. Corporate social responsibility is a myth. At most, corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is just fair game and the inevitable disruptions of the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening the impact on the rest of us? Governments have planning units, like city planning departments, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing – a citizen’s revolt. Look at our cities and their sprawl – despite all sorts of research and a history of sprawl, there is still very little discipline or planning to constrain the developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and yes, even disruption, are blessed with innocence as if, like children, they just do their thing and can’t be expected to anticipate the consequences or pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?

The Man Behind Trump’s Facebook Juggernaut

Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.

I just finished reading important reporting about The Man Behind Trump’s Facebook Juggernaut in the March 9th, 2020 issue of the New Yorker. The long article suggests that it wasn’t Cambridge Analytica or the Russians who swung the 2016 election. If anything had an impact it was the extensive use of social media, especially Facebook, by the Trump digital campaign under the leadership of Brad Parscale. The Clinton campaign focused on TV spots and believed they were going to win. The Trump campaign gathered lots of data, constantly tried new things, and drew on their Facebook “embed” to improve their game.

If each variation is counted as a distinct ad, then the Trump campaign, all told, ran 5.9 million Facebook ads. The Clinton campaign ran sixty-six thousand. “The Hillary campaign thought they had it in the bag, so they tried to play it safe, which meant not doing much that was new or unorthodox, especially online,” a progressive digital strategist told me. “Trump’s people knew they didn’t have it in the bag, and they never gave a shit about being safe anyway.” (p. 49)

One interesting service Facebook offered was “Lookalike Audiences”: you upload a spotty list of information about people, Facebook fills it out from its own data, and then it finds you more people who are similar. This lets you expand your list of people to microtarget (and Facebook gets you paying for more targeted ads).
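The underlying idea of audience expansion can be sketched as a simple nearest-neighbour problem: represent each user as a feature vector, then find the candidates most similar to the seed list. This is only an illustrative guess at the idea – Facebook’s actual algorithm is proprietary, and the function name and toy features here are hypothetical:

```python
import numpy as np

def lookalike_audience(seed, population, k=2):
    """Toy sketch of a 'lookalike' expansion (hypothetical, not Facebook's
    actual method): return the indices of the k candidates in `population`
    whose feature vectors are most similar, by cosine similarity, to the
    centroid of the `seed` users' feature vectors."""
    seed = np.asarray(seed, dtype=float)
    population = np.asarray(population, dtype=float)
    centroid = seed.mean(axis=0)
    # Cosine similarity between each candidate and the seed centroid,
    # guarding against zero-length vectors.
    norms = np.linalg.norm(population, axis=1) * np.linalg.norm(centroid)
    sims = population @ centroid / np.where(norms == 0, 1, norms)
    return np.argsort(-sims)[:k]

# Two seed users who score high on features 0 and 1; candidates 0 and 2
# resemble them far more than candidate 1 does.
seed = [[1.0, 1.0, 0.0], [0.9, 1.1, 0.1]]
candidates = [[1.0, 0.9, 0.0], [0.0, 0.1, 1.0], [0.8, 1.0, 0.2]]
print(lookalike_audience(seed, candidates, k=2))  # indices 0 and 2
```

The real service presumably works over millions of users and hundreds of behavioural features, but the principle – seed list in, similar strangers out – is the same.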

The end of the article gets depressing as it recounts how little the Democrats are doing to counter or match the social media campaign for Trump, which was essentially underway right after the 2016 election. One worries, by the end, that we will see a repeat.

Marantz, Andrew. (2020, March 9). “#WINNING: Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.” The New Yorker, pp. 44-55.

Philosophers On GPT-3

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI‘s recently released API to GPT-3, see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI continues to treat this technology as something to be careful about. Here is Chalmers on ethics:

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

Annette Zimmerman in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training.) It is not a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point that any AI application will have to make use of concepts from the application domain and all of these concepts will be contested. There are no simple concepts just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.


The International Review of Information Ethics

The International Review of Information Ethics (IRIE) has just published Volume 28 which collects papers on Artificial Intelligence, Ethics and Society. This issue comes from the AI, Ethics and Society conference that the Kule Institute for Advanced Study (KIAS) organized.

This issue of the IRIE also marks the first issue published on the PKP platform managed by the University of Alberta Library. KIAS is supporting the transition of the journal over to the new platform as part of its focus on AI, Ethics and Society in partnership with the AI for Society signature area.

We are still ironing out all the bugs and missing links, so bear with us, but the platform is solid and the IRIE is now positioned to sustainably publish original research in this interdisciplinary area.

A Letter on Justice and Open Debate

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty. We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reforms. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. 

Harper’s has published A Letter on Justice and Open Debate that is signed by all sorts of important people, from Salman Rushdie and Margaret Atwood to J.K. Rowling. The letter is critical of what might be called “cancel culture.”

The letter itself has been critiqued for coming from privileged writers who don’t experience the daily silencing of racism or other forms of prejudice. See the Guardian’s “Is free speech under threat from ‘cancel culture’? Four writers respond” for different responses to the letter, both critical and supportive.

This issue doesn’t seem to me that new. We have been struggling for some time with issues around the tolerance of intolerance. There is a broad range of what is considered tolerable speech and, I think, everyone would agree that there is also intolerable speech that doesn’t merit airing and countering. The problem is knowing where the line is.

What is missing on the internet is a sense of dialogue. Those who speechify (including me in blog posts like this) do so without entering into dialogue with anyone. We are all broadcasters; many without much of an audience. Entering into dialogue, by contrast, carries commitments to continue the dialogue, to listen, to respect and to work for resolution. In the broadcast chaos all you can do is pick the stations you will listen to and cancel the others.

Call for Papers for Replaying Japan Journal, Issue 3

The Replaying Japan Journal has issued a call for papers for Issue 3 with a deadline of September 30th, 2020. See the Current Call for Papers – Replaying Japan. The RJJ publishes original research papers on Japanese videogames, game culture and related media. We also publish translations, research notes, and reviews.

The RJJ is available online and in print, published by the Ritsumeikan (University) Center for Game Studies (see the RCGS English Pamphlet too). Inaba Mitsuyuki is the Editor in Chief and Fukuda Kazafumi is the Associate Editor. Jérémie Pelletier-Gagnon and I are the English Editors.

Articles in either Japanese or English are accepted. The Japanese Call for Papers is here.