The Machine Stops

Imagine, if you can, a small room, hexagonal in shape, like the cell of a bee. It is lighted neither by window nor by lamp, yet it is filled with a soft radiance. There are no apertures for ventilation, yet the air is fresh. There are no musical instruments, and yet, at the moment that my meditation opens, this room is throbbing with melodious sounds. An armchair is in the centre, by its side a reading-desk — that is all the furniture. And in the armchair there sits a swaddled lump of flesh — a woman, about five feet high, with a face as white as a fungus. It is to her that the little room belongs.

Like many, I reread E.M. Forster’s The Machine Stops this week while in isolation. This short story was published in 1909 and written as a reaction to The Time Machine by H.G. Wells. (See the full text here (PDF).) In Forster it is the machine that keeps the utopia of isolated pods working; in Wells it is a caste of workers, the Morlocks, who also turn out to eat the leisure class. Forster felt that technology, not class, was likely to be the problem, or at least part of it.

In this pandemic we see a bit of both. Following Wells we see a class of gig-economy deliverers who facilitate the isolated life of those of us who do intellectual work. Intellectual work has gone virtual, but we still need a physical layer maintained. (Even the language of a stack of layers comes metaphorically from computing.) But we also see in our virtualized work a dependence on an information machine that lets our bodies sit on the couch in isolation while we listen to throbbing melodies. My body certainly feels like it is settling into a swaddled lump of fungus.

An intriguing aspect of “The Machine Stops” is how Vashti, the mother who loves the life of the machine, measures everything in terms of ideas. She complains that flying to see her son and seeing the earth below gives her no ideas. Ideas don’t come from original experiences but from layers of interpretation. Ideas are the currency of an intellectual life of leisure which loses touch with the “real world.”

At the end, as the machine stops and Kuno, Vashti’s son, comes to his mother in the disaster, they reflect on how a few homeless refugees living on the surface might survive and learn not to trust the machine.

“I have seen them, spoken to them, loved them. They are hiding in the mist and the ferns until our civilization stops. To-day they are the Homeless — to-morrow—”

“Oh, to-morrow — some fool will start the Machine again, to-morrow.”

“Never,” said Kuno, “never. Humanity has learnt its lesson.”

 

2020 Brings the Death of IT | I, Cringely

It’s the end of IT because your device will no longer contain anything so it can be simply replaced via Amazon if it is damaged or lost, with the IT kid in the white shirt becoming an Uber driver.

How many of us have laughed at The IT Crowd? I remember when I was in support at the University of Toronto and would advise people to turn their computer off and back on. Surprisingly, that actually helped in some cases, as did wiggling the cable to the printer (back when printer cables had lots of pins). Robert X. Cringely, who is apparently not the only Cringely, has a prediction, 2020 Brings the Death of IT, on his I, Cringely site. He predicts that all of us working from home in isolation will accelerate a computing paradigm called SASE (Secure Access Service Edge – pronounced “sassy”) in which individual devices connect to cloud-based services. IT will disappear because to fix something you will just order a replacement from Amazon. There will be no fixing the local device, just replacing it. The rest is all up in the cloud and maintained by someone like Google. Locally we just have appliances.

Welcome to Dialogica: Thinking-Through Voyant!

Do you need online teaching ideas and materials? Dialogica was supposed to be a textbook, but instead we are adapting it for use in online learning and self-study. It is shared here under a CC BY 4.0 license so you can adapt it as needed.

Stéfan Sinclair and I have put up a web site with tutorial materials for learning Voyant. See Dialogi.ca: Thinking-Through Voyant!.

Dialogica (http://dialogi.ca) plays with the idea of learning through a dialogue. A dialogue with the text; a dialogue mediated by the tool; and a dialogue with instructors like us.

Dialogica is made up of a set of tutorials that students should be able to work through alone or with minimal support. These are Word documents that you (instructors) can edit to suit your teaching, and we are adding to them. We have added a gloss of teaching notes. Later we plan to add Spyral notebooks that go into greater detail on technical subjects, including how to program in Spyral.
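For those who haven’t seen Spyral, it is the notebook environment built into Voyant, where cells mix prose and JavaScript. As a rough sketch of the sort of thing a notebook cell can do (assuming Spyral’s loadCorpus helper and the bundled “austen” sample corpus; check the Spyral documentation for the exact interface):

    // Load Voyant's bundled Jane Austen sample corpus and
    // embed a Cirrus word cloud of it in the notebook
    loadCorpus("austen").tool("cirrus");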

Dialogica is made available with a CC BY 4.0 license so you can do what you want with it as long as you give us some sort of credit.

Econferences: why and how? A blog series

We are all having to learn how to do more remotely. This series of blog posts deals with the why, the what and the how of online conferences.

Open Book Publishers has just published a series of blog entries on Econferences: why and how? A blog series. This series adapts some of the interventions in a forthcoming collection I helped edit, Right Research: Modelling Sustainable Research Practices in the Anthropocene. We and OBP moved quickly when we realized that parts of our book would be useful at a time when all sorts of scholarly associations are having to move to online conferences (econferences). We took two of the case studies and put preprints up for download.

I have also written a quick document on Organizing a Conference Online: A Quick Guide.

I hope these materials help, and I thank Chelsea Miya, Oliver Rossier and Open Book Publishers for moving so quickly to make them available.

(Against) Virus as Metaphor

Our fondness for viruses as metaphor may have kept us from insisting on and observing the standards and practices that would prevent their spread.

Paul Elie in the New Yorker has a comment, (Against) Virus as Metaphor (March 19, 2020), where he argues that our habit of using viruses as a metaphor is dangerous. He draws on Susan Sontag’s Illness as Metaphor to discuss how using the virus as metaphor can both mislead us about what is happening on the internet with ideas and memes and cast a moral shadow back onto those who have the real disease. It is tempting to blame those with diseases for moral faults that presumably made them more vulnerable to the disease. The truth is that diseases like viruses pay no attention to our morals. There is nothing socially constructed or deconstructed about the coronavirus. It wasn’t invented by people, but it has real consequences for people. We have to be careful not to ascribe human agency to it.


Adventures in Science Fiction Cover Art: Disembodied Brains, Part I | Science Fiction and Other Suspect Ruminations

Gerard Quinn’s cover for the December 1956 issue of New Worlds

Thanks to Ali I came across this compilation of Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a number of pulp sci-fi covers showing giant brains. The giant brain was often how computing was imagined. In fact, early computers were called giant brains.

Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.

I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:

  • A brain is the seat of intelligence
  • The bigger the brain, the more intelligent
  • In big brains we might see emergent properties (like telepathy)
  • Scaling up the brain will give us artificially effective intelligence

This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains seems a bit literal, but the imagined futures are nonetheless important.

Covid-19 Notice on YouTube

COVID-19 Popup Notice on YouTube

When you go to YouTube now in Canada, a notice from the Public Health Agency of Canada pops up inviting you to Learn More from a reliable source. This strikes me as a great way to encourage people to get their information from a reliable source rather than wallow in fake news online. It matters particularly for YouTube, which is one of the facilitators of fake news.

More generally it shows an alternative way that social media platforms can fight fake news on key issues. They can work with governments to put appropriate information before people.

Further, the Learn More link goes to a government site with a wealth of information and links. Had it just been a short feel-good message with a bit of advice, the site probably wouldn’t succeed in drawing people towards reliable information. Instead the site has enough depth that one could spend a lot of time there and get a satisfying picture. This is what one needs to fight fake news in a time of obsession – plenty of true news for the obsessed.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too – VICE

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California Consumer Privacy Act to ask Clearview AI what information they kept on her and then to delete it. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers like Insta Stalkers and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – is it then the use of the images that is the problem? Or is it the recognition and search capability?

When Coronavirus Quarantine Is Class Warfare

A pandemic offers a great way to examine American class inequities.

There have been a couple of important stories about the quarantine as symbolic of our emerging class structure. The New York Times has an opinion piece by Charlie Warzel, When Coronavirus Quarantine Is Class Warfare (March 6, 2020):

That pleasantness is heavily underwritten by a “vast digital underclass.” Many services that allow you to stay at home work only when others have to be out in the world on your behalf.

The quarantine shows how many services are available to those who do intellectual work that can be done online. It is as if we had been planning to be quarantined for years. It also shows how one class can isolate itself, but at the expense of a different class that handles all the inconveniences of material stuff and the physical encounters of living. We have the permanent jobs with benefits; they deal with delivering food and hauling away trash. We can isolate ourselves from disease; they have to risk disease to work. The gig economy has expanded the class of precarious workers who support the rest of us.


The Prodigal Techbro

The journey feels fake. These ‘I was lost but now I’m found, please come to my TED talk’ accounts typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.


Maria Farrell, a technology policy critic, has written a nice essay, The Prodigal Techbro. She sympathizes with tech bros who have changed their minds, in the sense of wishing them well, but feels that they shouldn’t get so much attention. Instead we need to care for those who were critics from the beginning and who really need the attention and care. She maps this onto the parable of the Prodigal Son: why does the son who was lost get all the attention? She frames it as an ethical issue, which is interesting, and one I imagine fits an ethics of care.

She ends the essay with this advice to techies who are changing their mind:

So, if you’re a prodigal tech bro, do us all a favour and, as Rebecca Solnit says, help “turn down the volume a little on the people who always got heard”:

  • Do the reading and do the work. Familiarize yourself with the research and what we’ve already tried, on your own time. Go join the digital rights and inequality-focused organizations that have been working to limit the harms of your previous employers and – this is key – sit quietly at the back and listen.
  • Use your privilege and status and the 80 percent of your network that’s still talking to you to big up activists who have been in the trenches for years already—especially women and people of colour. Say ‘thanks but no thanks’ to that invitation and pass it along to someone who’s done the work and paid the price.
  • Understand that if you are doing this for the next phase of your career, you are doing it wrong. If you are doing this to explain away the increasingly toxic names on your resumé, you are doing it wrong. If you are doing it because you want to ‘give back,’ you are doing it wrong.

Do this only because you recognize and can say out loud that you are not ‘giving back’, you are making amends for having already taken far, far too much.