Robots Welcome to Take Over, as Pandemic Accelerates Automation – The New York Times

But labor and robotics experts say social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation. And long-simmering worries about job losses or a broad unease about having machines control vital aspects of daily life could dissipate as society sees the benefits of restructuring workplaces in ways that minimize close human contact.

The New York Times has a story pointing out that robots are welcome to take over, as the pandemic accelerates automation. While AI may not be that useful in making the crisis decisions, robots (and the AIs that drive them) can take over certain jobs that need doing, but which are dangerous to humans in a time of pandemic. Sorting trash is one example given; cleaning spaces is another.

We can imagine a dystopia where everything runs just fine with social (physical) distancing. Ultimately humans would do only the creative intellectual work, as imagined in E.M. Forster’s The Machine Stops (from 1909!). We would entertain each other with solitary interventions, or at least with works that can be made with the artists far apart. Perhaps green-screen technology and animation will even let us act alone and be composited together into virtual crowds.

How useful is AI in a pandemic?

DER SPIEGEL: What are the lessons to be learned from this crisis?

Dräger: It shows that common sense is more important than we all thought. This situation is so new and complicated that the problems can only be solved by people who carefully weigh their decisions. Artificial intelligence, which everyone has been talking so much about recently, isn’t much help at the moment.

Absolutely Mission Impossible: Interview with German Ventilator Manufacturer, Spiegel International, interviewed by Lukas Eberle and Martin U. Müller, March 27th, 2020.

There are so many lessons to be learned from the coronavirus, but one lesson is that artificial intelligence isn’t always the solution. In a health crisis that has to do with viruses in the air, not information, AI is only indirectly useful. As the head of the German ventilator manufacturer Drägerwerk puts it, the challenge of choosing whom to sell ventilators to at a time like this is not one to be handed over to an AI. What is needed in a crisis is humans carefully weighing decisions (and taking responsibility for them).

The Machine Stops

Imagine, if you can, a small room, hexagonal in shape, like the cell of a bee. It is lighted neither by window nor by lamp, yet it is filled with a soft radiance. There are no apertures for ventilation, yet the air is fresh. There are no musical instruments, and yet, at the moment that my meditation opens, this room is throbbing with melodious sounds. An armchair is in the centre, by its side a reading-desk — that is all the furniture. And in the armchair there sits a swaddled lump of flesh — a woman, about five feet high, with a face as white as a fungus. It is to her that the little room belongs.

Like many, I reread E.M. Forster’s The Machine Stops this week while in isolation. This short story was published in 1909 and written as a reaction to The Time Machine by H.G. Wells. (See the full text here (PDF).) In Forster it is the machine that keeps the utopia of isolated pods working; in Wells it is a caste of workers, the Morlocks, who also turn out to eat the leisure class. Forster felt that technology, not class, was likely to be the problem, or part of the problem.

In this pandemic we see a bit of both. Following Wells we see a class of gig-economy deliverers who facilitate the isolated life of those of us who do intellectual work. Intellectual work has gone virtual, but we still need a physical layer maintained. (Even the language of a stack of layers comes metaphorically from computing.) But we also see in our virtualized work a dependence on an information machine that lets our bodies sit on the couch in isolation while we listen to throbbing melodies. My body certainly feels like it is settling into a swaddled lump of fungus.

An intriguing aspect of “The Machine Stops” is how Vashti, the mother who loves the life of the machine, measures everything in terms of ideas. She complains that flying to see her son and seeing the earth below gives her no ideas. Ideas don’t come from original experiences but from layers of interpretation. Ideas are the currency of an intellectual life of leisure which loses touch with the “real world.”

At the end, as the machine stops and Kuno, Vashti’s son, comes to his mother in the disaster, they reflect on how a few homeless refugees living on the surface might survive and learn not to trust the machine.

“I have seen them, spoken to them, loved them. They are hiding in the mist and the ferns until our civilization stops. To-day they are the Homeless — to-morrow—”

“Oh, to-morrow — some fool will start the Machine again, to-morrow.”

“Never,” said Kuno, “never. Humanity has learnt its lesson.”


Adventures in Science Fiction Cover Art: Disembodied Brains, Part I | Science Fiction and Other Suspect Ruminations

Gerard Quinn’s cover for the December 1956 issue of New Worlds

Thanks to Ali I came across this compilation of Adventures in Science Fiction Cover Art: Disembodied Brains. Joachim Boaz has assembled a number of pulp sci-fi covers showing giant brains. The giant brain was often the way computing was imagined; in fact, early computers were popularly called giant brains, as in the title of Edmund Berkeley’s 1949 book Giant Brains, or Machines That Think.

Disembodied brains — in large metal womb-like containers, floating in space or levitating in the air (you know, implying PSYCHIC POWER), pulsating in glass chambers, planets with brain-like undulations, pasted in the sky (GOD!, surprise) above the Garden of Eden replete with mechanical contrivances among the flowers and butterflies and naked people… The possibilities are endless, and more often than not, taken in rather absurd directions.

I wonder if we can plot some of the early beliefs about computers through these images and stories of giant brains. What did we think the brain/mind was such that a big one would have exaggerated powers? The equation would go something like this:

  • A brain is the seat of intelligence
  • The bigger the brain, the more intelligent
  • In big brains we might see emergent properties (like telepathy)
  • Scaling up the brain will give us an effective artificial intelligence

This is what science fiction does so well – it takes some aspect of current science or culture and scales it up to imagine the consequences. Scaling brains may seem a bit literal, but the imagined futures are nonetheless important.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too – VICE

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California Consumer Privacy Act to ask Clearview AI what information they kept on her and then to delete it. They asked her for a photo and proof of identification, and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers, like Insta Stalkers, and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – so is it the use of the images that is the problem? Is it the recognition and search capability?

When Coronavirus Quarantine Is Class Warfare

A pandemic offers a great way to examine American class inequities.

There have been a couple of important stories about the quarantine as symbolic of our emerging class structure. The New York Times has an opinion piece by Charlie Warzel, When Coronavirus Quarantine Is Class Warfare (March 6th, 2020).

That pleasantness is heavily underwritten by a “vast digital underclass.” Many services that allow you to stay at home work only when others have to be out in the world on your behalf.

The quarantine shows how many services are available to those of us whose intellectual work can be done online; it is as if we had been planning to be quarantined for years. It shows how one class can isolate itself, but at the expense of a different class that handles all the inconvenient material stuff and physical encounters of living. We have the permanent jobs with benefits; they deal with delivering food and hauling trash. We can isolate ourselves from disease; they have to risk disease to work. The gig economy has expanded the class of precarious workers who support the rest of us.


Eyal Weizman: The algorithm is watching you

The London Review of Books has a blog entry by Eyal Weizman on how The algorithm is watching you (Feb. 19, 2020). Weizman, the founding director of Forensic Architecture, writes that he was denied entry into the USA because an algorithm had identified a security issue. He was going to the US for a show in Miami titled True to Scale.

Setting aside the issue of how the US government now seems to be denying entry to people who do inconvenient investigations (something a country that prides itself on human rights shouldn’t do), the use of an algorithm as the explanation is disturbing for a number of reasons:

  • As Weizman tells the story, the embassy officer couldn’t tell what triggered the algorithm. That would seem to violate important principles in the use of AIs, namely that an AI used in making decisions should be transparent and able to explain why it made the decision. Perhaps the agency involved doesn’t want to reveal the nasty logic behind its algorithms.
  • Further, there is no recourse, which violates another principle for AIs, namely that they should be accountable and that there should be mechanisms to challenge a decision.
  • The officer then asked Weizman to provide more information, such as his travel over the last 15 years and his contacts, which he understandably declined to do. In effect the system was asking him to surveil himself and share the results with a foreign government. Are we going to be put in the situation where we have to surrender privacy in order to get access to government services? We already do that for commercial services.
  • As Weizman points out, this shows the “arbitrary logic of the border” that is imposed on migrants. Borders have become grey zones where the laws inside a country don’t apply and the worst of a nation’s phobias are manifest.

Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story on how Facebook is to pay $550 million to settle a facial recognition suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook’s face recognition technology, which was part of Tag Suggestions, a feature that suggested names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent. The Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa), BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this (I sketch what it might look like in code after the list):

  • Face recognition is dependent on biometric data, which is extracted from an image or other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.
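
To make the position concrete, here is a minimal sketch in Python of what a consent-and-retention gate over biometric data might look like. This is my own illustration, not anything drawn from the text of BIPA or from Facebook’s systems; all the names (BiometricStore, BiometricRecord, the purposes) are hypothetical, and real secure storage (encryption at rest, access controls) is elided.

```python
# A hypothetical sketch of BIPA-style handling of biometric data:
# consent before storage, use limited to the stated purpose, and
# timely destruction. Not legal advice; all names are made up.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Set


@dataclass
class BiometricRecord:
    subject_id: str
    embedding: List[float]   # the extracted biometric template
    purpose: str             # the use identified when consent was sought
    collected_at: datetime
    retention: timedelta     # window after which the data must be destroyed


class BiometricStore:
    """Holds biometric templates only with consent, for a stated purpose,
    and destroys them when the retention window lapses."""

    def __init__(self) -> None:
        self._consents: Dict[str, Set[str]] = {}  # subject_id -> consented purposes
        self._records: List[BiometricRecord] = []

    def record_consent(self, subject_id: str, purpose: str) -> None:
        self._consents.setdefault(subject_id, set()).add(purpose)

    def store(self, record: BiometricRecord) -> None:
        # The BIPA-style condition: no consent for this purpose, no storage.
        if record.purpose not in self._consents.get(record.subject_id, set()):
            raise PermissionError(
                f"no consent from {record.subject_id} for {record.purpose!r}")
        self._records.append(record)  # secure storage (encryption) elided

    def lookup(self, subject_id: str, purpose: str) -> List[BiometricRecord]:
        # Limiting use: data is retrievable only for its stated purpose.
        return [r for r in self._records
                if r.subject_id == subject_id and r.purpose == purpose]

    def purge_expired(self, now: Optional[datetime] = None) -> int:
        # Timely destruction: drop templates past their retention window.
        now = now or datetime.now()
        kept = [r for r in self._records if r.collected_at + r.retention > now]
        purged = len(self._records) - len(kept)
        self._records = kept
        return purged


if __name__ == "__main__":
    store = BiometricStore()
    store.record_consent("alice", "photo tagging")
    store.store(BiometricRecord("alice", [0.12, 0.98, 0.33], "photo tagging",
                                datetime.now(), timedelta(days=365)))
    print(len(store.lookup("alice", "photo tagging")))  # 1
    print(len(store.lookup("alice", "ad targeting")))   # 0: use is limited
```

The point of the sketch is that consent, purpose limitation, and timely destruction are all checkable conditions; what no such gate can capture is the unanticipated use of the data, which I come back to below.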

There are a number of interesting points to be made about this position. First, it is not the gathering, storing and providing of access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?

Second, my reading is that a service like Clearview AI could also be sued if it scrapes images of people in Illinois and extracts biometric data. This could provide an answer to the question of what is ethically wrong with the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which they are gathering it when seeking consent and limit their use of the data to the identified purposes. When they no longer need the data for the identified use, they should destroy it. This is essentially part of the PIPEDA principle of Limiting Use, Disclosure, and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn’t always the case. Sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.

The Secretive Company That Might End Privacy as We Know It

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

The New York Times has an important story about Clearview AI, The Secretive Company That Might End Privacy as We Know It. Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application to which you can upload a picture; it then tries to recognize the person and show you their social media trail. They are selling the service to police forces.

Needless to say, this is a disturbing use of face recognition for surveillance built on our own social media. They are using public images that any one of us could look at, but at a scale no person could handle. They are doing something that would be almost impossible to stop, even with legislation. What’s to stop the intelligence services of another country from doing this (and more)? Perhaps privacy is no longer possible.


In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review, In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been talking about: that we need to move beyond guidelines, lists of principles, and checklists. She nicely describes some of the initiatives to hold AI accountable that are taking place and what should happen next. Read on if you want to see what I think we need.
