In response to unprecedented exigencies, more systemic solutions may be necessary and fully justifiable under fair use and fair dealing. This includes variants of controlled digital lending (CDL), in which books are scanned and lent in digital form, preserving the same one-to-one scarcity and time limits that would apply to lending their physical copies. Even before the new coronavirus, a growing number of libraries had implemented CDL for select physical collections.
To be honest, I am so tired of sitting on my butt that I plan to spend much more time walking to and browsing around the library at the University of Alberta. As much as digital access is a convenience, I’m missing the occasions for getting outside and walking that a library affords. Perhaps we should think of the library as a labyrinth – something deliberately difficult to navigate in order to give you an excuse to walk around.
Perhaps I need a book scanner on a standing desk at home to keep me on my feet.
Gerard Quinn’s cover for the December 1956 issue of New Worlds
Thanks to Ali I came across this compilation of Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a collection of pulp sci-fi cover art showing giant brains. The giant brain was often how computing was imagined. In fact, early computers were called “giant brains.”
Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.
I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:
A brain is the seat of intelligence
The bigger the brain, the more intelligent
In big brains we might see emergent properties (like telepathy)
Scaling up the brain will give us artificially effective intelligence
This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains, however, seems a bit literal, but the imagined futures are nonetheless important.
As most of you know, I left Uber in December and joined Stripe in January. I’ve gotten a lot of questions over the past couple of months about why I left and what my time at Uber was like. It’s a strange, fascinating, and slightly horrifying story that deserves to be told while it is still fresh in my mind, so here we go.
In this codebook we will investigate the macro-structure of philosophical literature. As a base for our investigation I have collected about fifty-thousand records…
Stéfan sent me a link to this interesting post, The structure of recent philosophy (II) · Visualizations. Maximilian Noichl has done a fascinating job using the Web of Science to develop a model of the field of Philosophy since the 1950s. In this post he describes his method and the resulting visualization of clusters (see above). In a later post (version III of the project) he arrives at a more nuanced visualization that seems truer to the breadth of what people do in philosophy. The version above is heavily weighted to Anglo-American analytic philosophy, while version III has more history of philosophy and continental philosophy.
I’ve just come across some important blog essays by David Gaertner. One is Why We Need to Talk About Indigenous Literature in the Digital Humanities, where he argues that colleagues in Indigenous literature are rightly skeptical of the digital humanities because DH hasn’t really taken to heart the concerns of Indigenous communities around the expropriation of data.
Bill Robinson has penned a nice essay, Marking 70 years of eavesdropping in Canada. The essay gives the background of Canada’s signals intelligence unit, the Communications Security Establishment (CSE), which just marked its 70th anniversary (on Sept. 1).
Unable to read the Soviets’ most secret messages, the UKUSA allies resorted to plain-language (unencrypted) communications and traffic analysis, the study of the external features of messages such as sender, recipient, length, date and time of transmission—what today we call metadata. By compiling, sifting, and fusing a myriad of apparently unimportant facts from the huge volume of low-level Soviet civilian and military communications, it was possible to learn a great deal about the USSR’s armed forces, the Soviet economy, and other developments behind the Iron Curtain without breaking Soviet codes. Plain language and traffic analysis remained key sources of intelligence on the Soviet Bloc for much of the Cold War.
Robinson is particularly interesting on “The birth of metadata collection,” which came about as the Soviets, to the allies’ frustration, developed encryption that couldn’t be broken.
3quarksdaily, one of my favourite sites to read, just posted a very nice essay by Sanjukta Paul, Where Probability Meets Literature and Language: Markov Models for Text Analysis. The essay starts with Markov, who in the early twentieth century was doing linguistic analysis by hand, and goes on to authorship attribution by scholars like Fiona Tweedie (the image above is from a study she co-authored). It also explains Markov models along the way.
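For readers who want to see the idea concretely: the kind of first-order word model the essay describes can be sketched in a few lines of Python. This is purely illustrative (not Paul’s code, nor Markov’s own character-level method); it records, for each word, which words follow it in a text, then random-walks those transitions to generate new text.

```python
import random
from collections import defaultdict

def build_model(words):
    """Map each word to the list of words observed to follow it.

    Repeats are kept on purpose: choosing uniformly from the list
    then reproduces the observed transition frequencies.
    """
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length, seed=0):
    """Random-walk the transition model to produce a chain of words."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: no observed successor
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus; any longer text produces more interesting chains.
text = "the cat sat on the mat and the cat ran"
model = build_model(text.split())
print(generate(model, "the", 6, seed=1))
```

The same bookkeeping, done by hand over vowels and consonants in Eugene Onegin, is essentially what Markov was up to; authorship attribution turns the model around and asks which author’s transition frequencies best explain a disputed text.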