“Excellence R Us”: university research and the fetishisation of excellence

Is “excellence” really the most efficient metric for distributing the resources available to the world’s scientists, teachers, and scholars? Does “excellence” live up to the expectations that academic communities place upon it? Is “excellence” excellent? And are we being excellent to each other in using it?

During today’s panel on Journals in the digital age: penser de nouveaux modèles de publication en sciences humaines at CSDH-SCHN 2020, someone linked to an essay in Palgrave Communications (2017), “Excellence R Us”: university research and the fetishisation of excellence. The essay does what should have been done some time ago: it questions the excellence of “excellence” as a value for everything in universities. The very overuse of “excellence” has devalued the concept. Surely much of what we do these days is merely “good enough”, especially as our budgets are cut and cut.

The article has three major parts:

  • Rhetoric of excellence – the first section looks at how there is little consensus about what counts as excellence across disciplines. Within disciplines excellence is negotiated and can become conservative.
  • Is “excellence” good for research? – the second section argues that there is little correlation between forms of excellence review and long-term metrics. The authors go on to outline some of the unfortunate side-effects of the push for excellence, such as how it can distort research and funding by promoting competition rather than collaboration. They also discuss how excellence disincentivizes replication – who wants to bother with replication if only novel, “excellent” results get rewarded?
  • Alternative narratives – the third section looks at alternative ways of distributing funding. The authors discuss “soundness” and “capacity” as alternatives to the winner-takes-all logic of excellence.

So much more could and should be addressed on this subject. I have often wondered about the effect of success rates in grant programmes (the percentage of applicants funded). When the success rate gets really low, as it is with many NEH programmes, applying almost becomes a waste of time and superstitions about success abound. SSHRC has healthier success rates that generally ensure that most researchers get funded if they persist and rework their proposals.

Hypercompetition in turn leads to greater (we might even say more shameless …) attempts to perform this “excellence”, driving a circular conservatism and reification of existing power structures while harming rather than improving the qualities of the underlying activity.

Ultimately the “adjunctification” of the university, where few faculty get tenure, also leads to hypercompetition and an impoverished research environment. Getting tenure could end up being the most prestigious (and fundamental) of grants – the grant of a research career.


Google Developers Blog: Text Embedding Models Contain Bias. Here’s Why That Matters.

Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we’ll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.

On the Google Developers Blog there is an interesting post, Text Embedding Models Contain Bias. Here’s Why That Matters. The post describes a technique, the Word Embedding Association Test (WEAT), for comparing different text embedding models. The idea is to see whether groups of words, like gendered names, associate more strongly with positive or negative words. The figure in the post shows the sentiment bias for female and male names across the different models.

While Google is using WEAT to try to detect and deal with bias in their models, in our case this technique could be used to identify forms of bias in corpora.
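To make the mechanics concrete, here is a minimal sketch of a WEAT-style effect size computed over word vectors. The random toy vectors below are placeholders of my own; in practice one would load embeddings trained on the corpus under study and fill the sets with real target words (e.g. female/male names) and attribute words (e.g. pleasant/unpleasant terms).

```python
# Minimal WEAT-style association test over precomputed word vectors.
# All vectors here are random placeholders, not real embeddings.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more w associates with attribute set A than with B
    return (np.mean([cosine(w, a) for a in A]) -
            np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    # Gap in mean association between target sets X and Y,
    # normalised by the pooled standard deviation (Caliskan et al., 2017)
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy usage: X/Y might hold vectors for female/male names,
# A/B vectors for pleasant/unpleasant words.
rng = np.random.default_rng(0)
X, Y, A, B = ([rng.normal(size=50) for _ in range(8)] for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```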

The Viral Virus

[Graph: relative frequency of the word “test*” over time]

Analyzing the Twitter Conversation Surrounding COVID-19

From Twitter I found out about this excellent visual essay, The Viral Virus by Kate Appel, from May 6, 2020. Appel used Voyant to study highly retweeted tweets from January 20th to April 23rd. She divided the tweets into weeks and then used the distinctive words (tf-idf) tool to tell a story about the changing discussion of Covid-19. As you scroll down you see lists of distinctive words and supporting images. At the end she shows some of the topics gained from topic modelling. It is a remarkably simple but effective use of Voyant.
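For those curious what the distinctive-words step might look like outside Voyant, here is a rough sketch that treats each week’s tweets as one document and ranks terms by tf-idf. The weekly_texts snippets are invented placeholders, not Appel’s data.

```python
# Sketch of per-week "distinctive words" via tf-idf.
# The weekly_texts strings are invented stand-ins for the real tweet sets.
from sklearn.feature_extraction.text import TfidfVectorizer

weekly_texts = [
    "wuhan outbreak first cases travel screening",
    "cruise ship quarantine masks shortage",
    "lockdown distancing testing ventilators curve",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(weekly_texts)  # rows: weeks, columns: terms
terms = vectorizer.get_feature_names_out()

for week, row in enumerate(tfidf.toarray(), start=1):
    top = row.argsort()[::-1][:3]  # three highest-scoring terms for the week
    print(f"Week {week}:", [terms[i] for i in top])
```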

COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy

Michael Brown has written a nice article in the U of Alberta folio, COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy. The article quotes me extensively on the ethics of these new bluetooth contact tracing tools. In the interview I tried to emphasize the importance of knowledge and consent:

  • Users of these apps should know that they are being traced through them, and
  • Users should consent to their use.

There are a variety of these apps, from the system pioneered by Singapore, called TraceTogether, to its Alberta cousin ABTraceTogether. There are also a variety of approaches to tracing people, from mining credit card records to dedicated apps. The EFF has a good essay on Protecting Civil Rights During a Public Health Crisis that I adapt here to provide guidelines for when one might gather data without knowledge or consent:

  • Medically necessary: There should be a clear and compelling explanation as to how this will save lives.
  • Personal information proportionate to need: The information gathered should fit the need and go no further.
  • Information handled by health informatics specialists: The gathering and processing should be handled by health informatics units, not signals intelligence or security services.
  • Deleted: It should be deleted once it is no longer needed.
  • Not organized by vulnerable demographics: The information should not be binned according to stereotypical or vulnerable demographics unless there is a compelling need. We should be very careful that we don’t use the data to further disadvantage groups.
  • Use reviewed afterwards: There should be a review after the crisis is over.
  • Transparency: Governments should be transparent about what they are gathering and why.
  • Due process: There should be open processes for people to challenge the gathering of their information or to challenge decisions taken as a result of such information.
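Returning to the apps themselves, the knowledge-and-consent point has a technical side. Decentralised designs like DP-3T (a cousin of the TraceTogether family, not the actual protocol of any app named above) show how a phone can broadcast rotating pseudonymous tokens and still support exposure matching later. A minimal sketch, with all names and parameters invented for illustration:

```python
# Sketch of rotating-token contact tracing in the spirit of DP-3T.
# Illustration only: not the actual protocol of TraceTogether or ABTraceTogether.
import hashlib, os

def daily_key():
    return os.urandom(32)  # secret key that normally never leaves the phone

def ephemeral_ids(key, slots=96):
    # Short-lived broadcast tokens for one day (e.g. one per 15 minutes)
    return [hashlib.sha256(key + bytes([i])).digest()[:16] for i in range(slots)]

# Phones record tokens they overhear. If an infected user consents to
# publishing their daily keys, others re-derive the tokens locally and
# check for overlap; no central register of who met whom is needed.
alice_key = daily_key()
heard_by_bob = {ephemeral_ids(alice_key)[42]}  # one chance encounter
rederived = set(ephemeral_ids(alice_key))      # after Alice publishes her key
print("exposure detected:", bool(heard_by_bob & rederived))
```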

Robots Welcome to Take Over, as Pandemic Accelerates Automation – The New York Times

But labor and robotics experts say social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation. And long-simmering worries about job losses or a broad unease about having machines control vital aspects of daily life could dissipate as society sees the benefits of restructuring workplaces in ways that minimize close human contact.

The New York Times has a story, Robots Welcome to Take Over, as Pandemic Accelerates Automation, pointing out that while AI may not be that useful in making crisis decisions, robots (and the AIs that drive them) can take over certain jobs that need doing but are dangerous to humans in a time of pandemic. Sorting trash is one example given. Cleaning spaces is another.

We can imagine a dystopia where everything runs just fine with social (physical) distancing. Ultimately humans would only do the creative intellectual work, as imagined in Forster’s The Machine Stops (from 1909!). We would entertain each other with solitary interventions, or at least works that can be made with the artists far apart. Perhaps green-screen technology and animation will even let us act alone and be composited together into virtual crowds.

Digitization in an Emergency: Fair Use/Fair Dealing and How Libraries Are Adapting to the Pandemic

In response to unprecedented exigencies, more systemic solutions may be necessary and fully justifiable under fair use and fair dealing. This includes variants of controlled digital lending (CDL), in which books are scanned and lent in digital form, preserving the same one-to-one scarcity and time limits that would apply to lending their physical copies. Even before the new coronavirus, a growing number of libraries have implemented CDL for select physical collections.

The Association of Research Libraries has a blog entry on Digitization in an Emergency: Fair Use/Fair Dealing and How Libraries Are Adapting to the Pandemic by Ryan Clough (April 1, 2020) with good links. The closing of physical libraries has accelerated the move from a hybrid of physical and digital resources to an entirely digital library. Controlled digital lending (where only a limited number of patrons can read a digital asset at a time) seems a sensible way to go.
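The one-to-one scarcity rule at the heart of controlled digital lending is simple enough to sketch in code. The class below is my own illustration of the constraint, not any library system’s actual implementation.

```python
# Toy model of controlled digital lending: concurrent digital loans
# never exceed the number of physical copies the library owns.
from datetime import datetime, timedelta

class ControlledLending:
    def __init__(self, owned_copies, loan_days=14):
        self.owned = owned_copies
        self.loans = {}             # patron -> due date
        self.period = timedelta(days=loan_days)

    def checkout(self, patron):
        self._expire()
        if len(self.loans) >= self.owned:
            return False            # every owned copy is "out", as on the shelf
        self.loans[patron] = datetime.now() + self.period
        return True

    def _expire(self):
        now = datetime.now()
        self.loans = {p: due for p, due in self.loans.items() if due > now}

shelf = ControlledLending(owned_copies=2)
print(shelf.checkout("ana"), shelf.checkout("ben"), shelf.checkout("cai"))
# -> True True False: the third patron waits until a copy is returned
```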

To be honest, I am so tired of sitting on my butt that I plan to spend much more time walking to and browsing around the library at the University of Alberta. As much as digital access is a convenience, I’m missing the occasions for getting outside and walking that a library affords. Perhaps we should think of the library as a labyrinth – something deliberately difficult to navigate in order to give you an excuse to walk around.

Perhaps I need a book scanner on a standing desk at home to keep me on my feet.

How useful is AI in a pandemic?

DER SPIEGEL: What are the lessons to be learned from this crisis?

Dräger: It shows that common sense is more important than we all thought. This situation is so new and complicated that the problems can only be solved by people who carefully weigh their decisions. Artificial intelligence, which everyone has been talking so much about recently, isn’t much help at the moment.

Absolutely Mission Impossible: Interview with German Ventilator Manufacturer, Spiegel International, interviewed by Lukas Eberle and Martin U. Müller, March 27th, 2020.

There are so many lessons to be learned from the Coronavirus, but one is that artificial intelligence isn’t always the solution. In a health crisis that has to do with viruses in the air, not information, AI is only indirectly useful. As the head of production at the German ventilator manufacturer Drägerwerk puts it, the challenge of choosing who to sell ventilators to at a time like this is not one to be handed over to an AI. Humans carefully weighing decisions (and taking responsibility for them) is what is needed in a crisis.

(Against) Virus as Metaphor

Our fondness for viruses as metaphor may have kept us from insisting on and observing the standards and practices that would prevent their spread.

Paul Elie in the New Yorker has a comment, (Against) Virus as Metaphor (March 19, 2020), where he argues that our habit of using viruses as a metaphor is dangerous. He draws on Susan Sontag’s Illness as Metaphor to discuss how the virus as metaphor can both mislead us about what is happening with ideas and memes on the internet and cast a moral shadow back onto those who have the real disease. It is tempting to blame those with diseases for moral faults that presumably made them more vulnerable. The truth is that diseases like viruses pay no attention to our morals. There is nothing socially constructed or deconstructed about the Coronavirus. It wasn’t invented by people, but it has real consequences for people. We have to be careful not to ascribe human agency to it.


Adventures in Science Fiction Cover Art: Disembodied Brains, Part I | Science Fiction and Other Suspect Ruminations

Gerard Quinn’s cover for the December 1956 issue of New Worlds

Thanks to Ali I came across this compilation of Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a number of pulp sci-fi covers showing giant brains. The giant brain was often the way computing was imagined; in fact, early computers were called giant brains.

Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.

I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:

  • A brain is the seat of intelligence
  • The bigger the brain, the more intelligent
  • In big brains we might see emergent properties (like telepathy)
  • Scaling up the brain artificially will give us effective intelligence

This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains, however, seems a bit literal, but the imagined futures are nonetheless important.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too – VICE

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California privacy laws to ask Clearview AI what information they kept on her and then to delete it. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers like Insta Stalkers and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – so is it the use of the images that is the problem? Is it the recognition and search capability?
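Part of an answer may lie in scale and searchability. The sketch below, with random vectors standing in for a real face-embedding model and made-up example.org URLs, shows how indexing embeddings turns a pile of photos into a person-search engine – something a personal photo collection is not.

```python
# Schematic of face search: each face becomes an embedding vector, and a
# query face is matched by nearest-neighbour distance over the index.
# Random vectors stand in for a real embedding model; URLs are fake.
import numpy as np

rng = np.random.default_rng(1)
index = rng.normal(size=(10_000, 128))  # embeddings of scraped face photos
sources = [f"https://example.org/photo/{i}" for i in range(10_000)]

# A noisy new photo of the same person as entry 1234
query = index[1234] + rng.normal(scale=0.05, size=128)

dists = np.linalg.norm(index - query, axis=1)   # distance to every known face
best = int(np.argmin(dists))
print("closest match:", sources[best])          # -> .../photo/1234
```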