Diggin’ in the Carts: Japanese video game music history

Meet the men and women responsible for creating the most iconic tunes in video game history.

We finished up the Replaying Japan 2021 conference today. The conference was run online using Zoom and Gather Town, where a hidden easter egg linked to Diggin’ in the Carts: Japanese video game music history, a five-part documentary from Red Bull that is quite good. The five 15-minute episodes make up the first season. I’m not sure if there will be other seasons, but there is a related radio show with multiple seasons. The documentary episodes nicely feature the composers and experts talking about the history of Japanese game music, along with other musicians commenting on the influence of this early music, which would have been heard over and over in homes with Japanese consoles.

The creator of the show is Nick Dwyer, who is interviewed here about the documentary and the associated radio show.

Right Research: Modelling Sustainable Research Practices in the Anthropocene – Open Book Publishers

This timely volume responds to an increased demand for environmentally sustainable research, and is outstanding not only in its interdisciplinarity, but its embrace of non-traditional formats, spanning academic articles, creative acts, personal reflections and dialogues.

Open Book Publishers has just published the book I helped edit, Right Research: Modelling Sustainable Research Practices in the Anthropocene. The book gathers essays that came out of the last Around the World Conference that the Kule Institute for Advanced Research ran on Sustainable Research.

The Around the World econferences we ran were experiments in finding a more sustainable way to meet and exchange ideas, one that involved less flying. It is good to see this book out in print.

Can’t Get You Out of My Head

I finally finished watching the BBC documentary series Can’t Get You Out of My Head by Adam Curtis. It is hard to describe this series, which is cut entirely from archival footage with Curtis’ voice interpreting and linking the diverse clips. The subtitle is “An Emotional History of the Modern World”, which is true in that the clips are often strangely affecting, but it doesn’t convey the broad social-political connections Curtis makes in the narration. He is trying out a set of theses about recent history in China, the US, the UK, and Russia leading up to Brexit and Trump. I’m still digesting the six-part series, but here are some of the threads of his theses:

  • Conspiracies. He traces our fascination with, and now belief in, conspiracies back to a memo by Jim Garrison in 1967 about the JFK assassination. The memo, Time and Propinquity: Factors in Phase I, presents results of an investigative technique built on finding patterns of linkages between fragments of information. When you find strange coincidences, you then weave a story (conspiracy) to join them, rather than starting with a theory and checking the facts. This reminds me of what software like Palantir does – it makes (often coincidental) connections easy to find so you can tell stories (see the toy sketch after this list). Curtis later follows the evolution of conspiracies as a political force, leading to liberal conspiracies about Trump (that he was a Russian agent) and alt-right conspiracies like Q-Anon. We are all willing to surrender our independence of thought for the joys of conspiracies.
  • Big Data Surveillance and AI. Curtis connects this new mode of investigation to what big data platforms like Google now do with AI. They gather lots of fragments of information about us and then a) use it to train AIs, and b) sell inferences drawn from the data to advertisers, all while keeping us anxious through the promotion of emotional content. Big data promises to deal with the complexity of a world we have given up on trying to control; it promises to manage the complexity of the fragments by finding patterns in them. This reminds me of discussions around the End of Theory and the shift from theories to correlations.
  • Psychology. Curtis also connects this to emerging psychological theories about how our minds may be fragmented, with different unconscious urges moving us. Psychology then offers ways to figure out what people really want and to nudge or prime them. This is what Cambridge Analytica promised, and what we believed it could do, thanks in part to conspiracy theories about its power. Curtis argues at the end that behavioural psychology has been unable to replicate many of the experiments undergirding nudging, and he suggests that all this big data manipulation doesn’t really work, though the platforms can heighten our anxiety and emotional stress. A particularly disturbing section of the last episode discusses how the US developed “enhanced” torture techniques based on these ideas after 9/11 to create “learned helplessness” in prisoners. The idea was to fragment their consciousness so that they would release a flood of these fragments, some of which might be useful intelligence.
  • Individualism. A major theme is the rise of individualism since the war and how individuals are controlled. China’s social credit model of explicit control through surveillance is contrasted with the Western model of consumer-driven platform surveillance. Either way, Curtis’ conclusion seems to be that we need to regain confidence in our own individual power to choose our future and strive for it. We need to stop letting others control us with fear or distract us with consumption. We need to choose our future.
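
To make the propinquity idea concrete, here is a toy sketch of my own – not Garrison’s actual method and not anything Palantir ships – of pattern-of-linkage investigation: index fragments of information by place and time, and every coincidental co-occurrence becomes a connection one could weave into a story. All the names and records are invented.

```python
from collections import defaultdict
from itertools import combinations

# Invented fragments, each linking a person to a place at a time.
fragments = [
    ("Smith", "Canal St. office", 1963),
    ("Jones", "Canal St. office", 1963),
    ("Jones", "Dallas", 1963),
    ("Brown", "Dallas", 1963),
]

# Index people by (place, time) so co-occurrences fall out for free.
seen_at = defaultdict(set)
for person, place, time in fragments:
    seen_at[(place, time)].add(person)

# Every pair sharing a place and time becomes a "linkage" -- raw material
# for a story, whether or not the coincidence means anything.
for (place, time), people in sorted(seen_at.items()):
    for a, b in combinations(sorted(people), 2):
        print(f"{a} and {b} were both connected to {place} in {time}")
```

Note what is missing: any prior theory, and any test of whether a linkage is significant. The method generates connections first and invites the story afterwards, which is exactly the inversion Curtis worries about.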

In some ways the series is a plea for everyone to make up their own stories from their fragmentary experience. The series starts with this quote,

The ultimate hidden truth of the world is that it is something we make, and could just as easily make differently. (David Graeber)

Of course, Curtis’ series could just be a conspiracy story that he wove out of the fragments he found in the BBC archives.

Addressing the Alarming Systems of Surveillance Built By Library Vendors

The Scholarly Publishing and Academic Resources Coalition (SPARC) is drawing attention to how we need to be Addressing the Alarming Systems of Surveillance Built By Library Vendors. This was triggered by a story in The Intercept that LexisNexis (is) to provide (a) giant database of personal information to ICE.

The company’s databases offer an oceanic computerized view of a person’s existence; by consolidating records of where you’ve lived, where you’ve worked, what you’ve purchased, your debts, run-ins with the law, family members, driving history, and thousands of other types of breadcrumbs, even people particularly diligent about their privacy can be identified and tracked through this sort of digital mosaic. LexisNexis has gone even further than merely aggregating all this data: The company claims it holds 283 million distinct individual dossiers of 99.99% accuracy tied to “LexIDs,” unique identification codes that make pulling all the material collected about a person that much easier. For an undocumented immigrant in the United States, the hazard of such a database is clear. (The Intercept)

That LexisNexis has been building databases on people isn’t new. Sarah Brayne has a book about predictive policing titled Predict and Surveil where, among other things, she describes how the LAPD uses Palantir and how the police databases integrated in Palantir are enhanced by commercial databases like those sold by LexisNexis. (There is an essay excerpted from the book here: Enter the Dragnet.)

I suspect environments like Palantir make all sorts of smaller and specialized databases more commercially valuable, which is leading companies that were library database providers to expand their businesses. Before, a database about repossessions might have been of interest only to a specialized community. Now it can be linked to other information and becomes another dimension of the data. In particular, these databases provide information about all the people who aren’t in police databases. They provide the breadcrumbs needed to surveil those not documented elsewhere.
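
To see why aggregation is the danger, here is a minimal sketch of the idea: several separately unremarkable databases compose into a dossier once they share a key. The data is invented, and “id-042” is just a made-up stand-in for an identifier like a LexID, not the real LexisNexis scheme.

```python
# Three small, separately unremarkable databases, keyed by a shared
# identifier (a made-up stand-in for something like a LexID).
addresses = {"id-042": "123 Elm St, Springfield"}
repossessions = {"id-042": "2009 pickup, repossessed 2019"}
utilities = {"id-042": "power account opened 2020-03-01"}

def profile(person_id: str) -> dict:
    """Join the breadcrumbs for one person across all the databases."""
    return {
        "address": addresses.get(person_id),
        "repossession": repossessions.get(person_id),
        "utility": utilities.get(person_id),
    }

# Someone absent from any police database still gets a dossier here.
print(profile("id-042"))
```

No single source is alarming; the join is. That is why a formerly specialized library database becomes a surveillance asset once it can be keyed to everything else.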

The SPARC call points out that we (academics, university libraries) have been funding these database providers. 

Dollars from library subscriptions, directly or indirectly, now support these systems of surveillance. This should be deeply concerning to the library community and to the millions of faculty and students who use their products each day and further underscores the urgency of privacy protections as library services—and research and education more generally—are now delivered primarily online.

This raises the question of our complicity, and of whether we could do without some of these companies. At a deeper level it raises questions about the curiosity of the academy. We are dedicated to knowledge as an unalloyed good, yet we sit at the heart of a large system of surveillance – surveillance of the past, of literature, of nature, of the cosmos, and of ourselves.

What was Gamergate? The lessons we still haven’t learned

Gamergate should have armed us against bad actors and bad-faith arguments. It didn’t.

Vox has an important article connecting the storming of the US Capitol with Gamergate: What was Gamergate? The lessons we still haven’t learned. The point is that Gamergate and the storming are visible symptoms of something deeper. I would go further and connect these with activities that progressives approve of, like some of the Anonymous initiatives. For that matter, the recent populist retail-investor campaign around stocks like GameStop has similar roots in new forms of organizing and new ironic ideologies.

Digital humanities – How data analysis can enrich the liberal arts

But despite data science’s exciting possibilities, plenty of other academics object to it

The Economist has a nice Christmas Special on the Digital humanities – How data analysis can enrich the liberal arts. The article tells a bit of our history (starting with Busa, of course) and gives examples of new work like that of Ted Underwood. They note criticism about how DH may be sucking up all the money or corrupting the humanities, but they also point out how little DH gets from the NEH pot (some $60m out of $16bn), which is hardly evidence of a takeover. The truth is, as they note, that the humanities are under attack again and the digital humanities don’t make much of a difference either way. The neighboring fields that I see students moving to are media arts, communication studies, and specializations like criminology. Those are the threats, but also sanctuaries, for the humanities.

Blogging your research: Tips for getting started

Curious about research blogging, but not sure where to start?

Alice Fleerackers and Lupin Battersby of the ScholCommLab have put together a good post on Blogging your research: Tips for getting started. Despite being committed to blogging (this blog has been going since 2003), I must admit that I’m not sure blogging has the impact it once had. Twitter seems to have replaced blogging as a way to quickly share and follow research. Blog platforms like WordPress have become project news and promotion systems.

What few talk about is how blogging can be a way of journaling for oneself. My blog certainly serves as a form of memory by and for myself. Even if I am the only one who searches it (which I often do when I’m looking for information about something I knew but forgot), it is still useful. Does everything in academia have to be about promotion and public impact?

In this age of fake news we seem to be back in the situation that Socrates and Gorgias sparred about in Plato’s Gorgias. Gorgias makes the point that the orator or, in today’s terms the communications specialist, can be more convincing than the scholar because they know how to “communicate”.

Socrates: Then the case is the same in all the other arts for the orator and his rhetoric: there is no need to know [459c] the truth of the actual matters, but one merely needs to have discovered some device of persuasion which will make one appear to those who do not know to know better than those who know.

Gorgias: Well, and is it not a great convenience, Socrates, to make oneself a match for the professionals by learning just this single art and omitting all the others? (Gorgias 459a)

It certainly feels like today there is a positive distrust of expertise, such that the blatant lie, if repeated often enough, can convince those who want to hear it. Does communicating about our research have the beneficial effect we hope it does? Or does it just inflate our own bubble without touching those of others?

Freedom Online Coalition joint statement on artificial intelligence

The Freedom Online Coalition (FOC) has issued a joint statement on artificial intelligence (AI) and human rights.  While the FOC acknowledges that AI systems offer unprecedented opportunities for human development and innovation, the Coalition expresses concern over the documented and ongoing use of AI systems towards repressive and authoritarian purposes, including through facial recognition technology […]

The Freedom Online Coalition is a coalition of countries including Canada that “work closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support Internet freedom – free expression, association, assembly, and privacy online – worldwide.” It was founded in 2011 at the initiative of the Dutch.

FOC has just released Joint Statement on Artificial Intelligence and Human Rights that calls for “transparency, traceability and accountability” in the design and deployment of AI systems. They also reaffirm that “states must abide by their obligations under international human rights law to ensure that human rights are fully respected and protected.” The statement ends with a series of recommendations or “Calls to action”.

What is important about this statement is the role it recommends for the state. This is not a set of vapid principles that developers should voluntarily adhere to; it calls for appropriate legislation.

States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination.

I note that yesterday the Liberals introduced a Digital Charter Implementation Act that could significantly change the regulations around data privacy. More on that as I read about it.

Thanks to Florence for pointing this FOC statement out to me.

How’s the Alberta PSE Re-Think Going?

Anyways, in sum: the emerging Alberta 2030 recommendations are for the most part banalities.  Not necessarily bad banalities – there are a lot of worthy ideas in there, just none which suggest any evidence of innovative thinking or actual learning from other jurisdictions.  But there are two obvious flashpoints, neither of which seems very promising ground for the government to launch fights.

Alex Usher has just posted How’s the Alberta PSE Re-Think Going? (Part 2) which, surprise, follows How’s the Alberta PSE Re-Think Going? (Part 1). Part 1 deals with whether the McKinsey review of Post-Secondary Education is worth the $3.7 million the province is paying for it. (It is not!) Part 2 looks at the recommendations.

What Usher doesn’t talk much about is the “Building Skills for Jobs” aspect of the whole exercise. The assumption is that PSE is all about giving students skills so they can get jobs. I also suspect that the skills imagined by the government are mostly those needed by the energy industry, even though those jobs might not be there in the future. As Usher puts it, “most UCP policy is a nostalgia play for the resource boom of 2004-2014”.

The two flashpoints Usher mentions are 1) a recommendation to deregulate tuition and balance that with needs-based financial aid, and 2) a recommendation to have fewer boards: instead of a board for each institution, there could be just one board for the whole research university sector.

We shall see.

Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian about Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share and then ditch the drivers they have employed for self-driving cars once those become reliable. In other words, they are first disrupting human taxi services so as to capitalize on driverless technology when it comes. Their current business loses money as they feast on venture capital to gain market share, and if they can’t make the switch to driverless they will likely go bankrupt.

This raises the question of whether we will see driverless technology good enough to oust human drivers. I suspect we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so that they are safe for driverless vehicles. In countries with chaotic, hard-to-map streets (think medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular, and AI in general, are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, and driving out the small human taxi businesses, all in order to monopolize a market that it can then profit from by replacing the drivers who got it there with driverless cars. Why is this the way AI is being commercialized, rather than trying to create better public transit systems or better systems for helping people with disabilities? Whom do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those commercializing them and their financial backers (which are often all of us, through our pension plans) will also feel no moral responsibility, because we are just benefiting from (mostly) legal, innovative businesses. Corporate social responsibility is a myth. At most, corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is just fair game, counted among the inevitable disruptions of the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening the impact on the rest of us? Governments have planning units, like city planning departments, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing – a citizens’ revolt. Look at our cities and their sprawl – despite all sorts of research and a long history of it, there is still very little discipline or planning to constrain the developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and, yes, even disruption are blessed with innocence as if, like children, they just do their thing and can’t be expected to anticipate the consequences or have to pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?