Analyzing the Twitter Conversation Surrounding COVID-19
From Twitter I found out about this excellent visual essay on The Viral Virus by Kate Appel from May 6, 2020. Appel used Voyant to study highly retweeted tweets from January 20th to April 23rd. She divided the tweets into weeks and then used the distinctive words (tf-idf) tool to tell a story about the changing discussion of COVID-19. As you scroll down you see lists of distinctive words and supporting images. At the end she shows some of the topics gained from topic modelling. It is a remarkably simple but effective use of Voyant.
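The distinctive-words approach Appel uses can be approximated outside Voyant. Here is a minimal tf-idf sketch in Python, where each "document" is a week's worth of tweet text; the sample texts are made-up placeholders, not Appel's data:

```python
import math
from collections import Counter

# Each "document" is one week's worth of tweet text (placeholder samples).
weeks = {
    "week1": "wuhan outbreak virus travel china",
    "week2": "masks shortage hospital virus cases",
    "week3": "lockdown distancing cases deaths virus",
}

# Tokenize each week into a bag of words.
docs = {w: Counter(text.split()) for w, text in weeks.items()}
n_docs = len(docs)

def tfidf(term, week):
    """Term frequency in the week, weighted by rarity across all weeks."""
    tf = docs[week][term] / sum(docs[week].values())
    df = sum(1 for counts in docs.values() if term in counts)
    idf = math.log(n_docs / df)
    return tf * idf

# A week's most "distinctive" words are those with the highest tf-idf:
# words common that week but rare in the corpus as a whole.
for week in weeks:
    ranked = sorted(docs[week], key=lambda t: tfidf(t, week), reverse=True)
    print(week, ranked[:3])
```

Note how a word like "virus" that appears every week scores zero: tf-idf deliberately discounts the words shared across the whole period, which is exactly why it surfaces what changed week to week.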
Users of these apps should know that they are being traced through them, and users should consent to their use.
There are a variety of these apps from the system pioneered by Singapore called TraceTogether to its Alberta cousin ABTraceTogether. There are also a variety of approaches to tracing people from using credit card records to apps like TraceTogether. The EFF has a good essay on Protecting Civil Rights During a Public Health Crisis that I adapt here to provide guidelines for when one might gather data without knowledge or consent:
Medically necessary: There should be a clear and compelling explanation as to how this will save lives.
Personal information proportionate to need: The information gathered should fit the need and go no further.
Information handled by health informatics specialists: The gathering and processing should be handled by health informatics units, not signals intelligence or security services.
Deleted: It should be deleted once it is no longer needed.
Not organized by vulnerable demographics: The information should not be binned according to stereotypical or vulnerable demographics unless there is a compelling need. We should be very careful that we don’t use the data to further disadvantage groups.
Use reviewed afterwards: There should be a review after the crisis is over.
Transparency: Governments should be transparent about what they are gathering and why.
Due process: There should be open processes for people to challenge the gathering of their information or to challenge decisions taken as a result of such information.
But labor and robotics experts say social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation. And long-simmering worries about job losses or a broad unease about having machines control vital aspects of daily life could dissipate as society sees the benefits of restructuring workplaces in ways that minimize close human contact.
We can imagine a dystopia where everything can run just fine with social (physical) distancing. Ultimately humans would only do the creative intellectual work, as imagined in Forster’s The Machine Stops (from 1909!). We would entertain each other with solitary interventions, or at least works that can be made with the artists far apart. Perhaps green-screen technology and animation will even let us act alone and be composited together into virtual crowds.
In response to unprecedented exigencies, more systemic solutions may be necessary and fully justifiable under fair use and fair dealing. This includes variants of controlled digital lending (CDL), in which books are scanned and lent in digital form, preserving the same one-to-one scarcity and time limits that would apply to lending their physical copies. Even before the new coronavirus, a growing number of libraries have implemented CDL for select physical collections.
To be honest, I am so tired of sitting on my butt that I plan to spend much more time walking to and browsing around the library at the University of Alberta. As much as digital access is a convenience, I’m missing the occasions for getting outside and walking that a library affords. Perhaps we should think of the library as a labyrinth – something deliberately difficult to navigate in order to give you an excuse to walk around.
Perhaps I need a book scanner on a standing desk at home to keep me on my feet.
DER SPIEGEL: What are the lessons to be learned from this crisis?
Dräger: It shows that common sense is more important than we all thought. This situation is so new and complicated that the problems can only be solved by people who carefully weigh their decisions. Artificial intelligence, which everyone has been talking so much about recently, isn’t much help at the moment.
There are so many lessons to be learned from the Coronavirus, but one is that artificial intelligence isn’t always the solution. In a health crisis that has to do with viruses in the air, not information, AI is only indirectly useful. As the head of production at the German ventilator manufacturer Drägerwerk puts it, the challenge of choosing who to sell ventilators to at a time like this is not one to be handed over to an AI. Humans carefully weighing decisions (and taking responsibility for them) is what is needed in a crisis.
Our fondness for viruses as metaphor may have kept us from insisting on and observing the standards and practices that would prevent their spread.
Paul Elie in the New Yorker has a comment, (Against) Virus as Metaphor (March 19, 2020), where he argues that our habit of using viruses as a metaphor is dangerous. He draws on Susan Sontag’s Illness as Metaphor to discuss how using the virus as metaphor can not only mislead us about what is happening on the internet with ideas and memes, but also cast a moral shadow back onto those who have the real disease. It is tempting to blame those with diseases for moral faults that presumably made them more vulnerable to the disease. The truth is that diseases like viruses pay no attention to our morals. There is nothing socially constructed or deconstructed about the Coronavirus. It wasn’t invented by people, but it has real consequences for people. We have to be careful not to ascribe human agency to it.
Gerard Quinn’s cover for the December 1956 issue of New Worlds
Thanks to Ali I came across this compilation of Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a number of pulp sci-fi covers showing giant brains. The giant brain was often the way computing was imagined. In fact, early computers were called giant brains.
Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.
I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:
A brain is the seat of intelligence
The bigger the brain, the more intelligent
In big brains we might see emergent properties (like telepathy)
Scaling up the brain will give us artificially effective intelligence
This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains, however, seems a bit literal, but the imagined futures are nonetheless important.
We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.
Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California privacy laws to ask Clearview AI what information they kept on her and then to delete it. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media, they are scraping other scrapers like Insta Stalkers and various right wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.
This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – is it then the use of the images that is the problem? Is it the recognition and search capability?
A pandemic offers a great way to examine American class inequities.
There have been a couple of important stories about the quarantine as symbolic of our emerging class structure. The New York Times has an opinion piece by Charlie Warzel, When Coronavirus Quarantine Is Class Warfare (March 6th, 2020).
That pleasantness is heavily underwritten by a “vast digital underclass.” Many services that allow you to stay at home work only when others have to be out in the world on your behalf.
The quarantine shows how many services we have available for those who do intellectual work that can be done online. It is as if we were planning to be quarantined for years. The quarantine shows how one class can isolate themselves, but at the expense of a different class that handles all the inconveniences of material stuff and physical encounters of living. We have the permanent jobs with benefits. They deal with delivering food and trash. We can isolate ourselves from diseases, they have to risk disease to work. The gig economy has expanded the class of precarious workers that support the rest of us.
The journey feels fake. These ‘I was lost but now I’m found, please come to my TED talk’ accounts typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.
Maria Farrell, a technology policy critic, has written a nice essay on The Prodigal Techbro. She sympathizes with technology bros who have changed their mind, in the sense of wishing them well, but feels that they shouldn’t get so much attention. Instead we need to care for those who were critics from the beginning and who really need the attention and care. She maps this onto the parable of the Prodigal Son: why does the son who was lost get all the attention? She makes it an ethical issue, which is interesting; I can imagine it fitting into an ethics of care.
She ends the essay with this advice to techies who are changing their mind:
So, if you’re a prodigal tech bro, do us all a favour and, as Rebecca Solnit says, help “turn down the volume a little on the people who always got heard”:
Do the reading and do the work. Familiarize yourself with the research and what we’ve already tried, on your own time. Go join the digital rights and inequality-focused organizations that have been working to limit the harms of your previous employers and – this is key – sit quietly at the back and listen.
Use your privilege and status and the 80 percent of your network that’s still talking to you to big up activists who have been in the trenches for years already—especially women and people of colour. Say ‘thanks but no thanks’ to that invitation and pass it along to someone who’s done the work and paid the price.
Understand that if you are doing this for the next phase of your career, you are doing it wrong. If you are doing this to explain away the increasingly toxic names on your resumé, you are doing it wrong. If you are doing it because you want to ‘give back,’ you are doing it wrong.
Do this only because you recognize and can say out loud that you are not ‘giving back’, you are making amends for having already taken far, far too much.