When Coronavirus Quarantine Is Class Warfare

A pandemic offers a great way to examine American class inequities.

There have been a couple of important stories about the quarantine as symbolic of our emerging class structure. The New York Times has an opinion piece by Charlie Warzel, When Coronavirus Quarantine Is Class Warfare (March 6, 2020).

That pleasantness is heavily underwritten by a “vast digital underclass.” Many services that allow you to stay at home work only when others have to be out in the world on your behalf.

The quarantine shows how many services are available to those whose intellectual work can be done online; it is as if we had been planning to be quarantined for years. It also shows how one class can isolate itself, but only at the expense of another class that handles the inconveniences of material stuff and the physical encounters of living. We have the permanent jobs with benefits; they deliver the food and haul the trash. We can isolate ourselves from disease; they have to risk disease to work. The gig economy has expanded the class of precarious workers who support the rest of us.

Continue reading When Coronavirus Quarantine Is Class Warfare

The Prodigal Techbro

The journey feels fake. These ‘I was lost but now I’m found, please come to my TED talk’ accounts typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.

Maria Farrell, a technology policy critic, has written a nice essay, The Prodigal Techbro. She sympathizes with tech bros who have changed their minds, in the sense of wishing them well, but feels that they shouldn't get so much attention. Instead we should care for those who were critics from the beginning and who really need the attention and support. She maps this onto the parable of the Prodigal Son: why does the son who was lost get all the attention? She makes it an ethical issue, which is interesting; I can imagine it fitting an ethics of care.

She ends the essay with this advice to techies who are changing their mind:

So, if you’re a prodigal tech bro, do us all a favour and, as Rebecca Solnit says, help “turn down the volume a little on the people who always got heard”:

  • Do the reading and do the work. Familiarize yourself with the research and what we’ve already tried, on your own time. Go join the digital rights and inequality-focused organizations that have been working to limit the harms of your previous employers and – this is key – sit quietly at the back and listen.
  • Use your privilege and status and the 80 percent of your network that’s still talking to you to big up activists who have been in the trenches for years already—especially women and people of colour. Say ‘thanks but no thanks’ to that invitation and pass it along to someone who’s done the work and paid the price.
  • Understand that if you are doing this for the next phase of your career, you are doing it wrong. If you are doing this to explain away the increasingly toxic names on your resumé, you are doing it wrong. If you are doing it because you want to ‘give back,’ you are doing it wrong.

Do this only because you recognize and can say out loud that you are not ‘giving back’, you are making amends for having already taken far, far too much.

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It's the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2020). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world's attention is managed by a very small number of platforms (Facebook, Google, and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.

Continue reading It’s the (Democracy-Poisoning) Golden Age of Free Speech

Eyal Weizman: The algorithm is watching you

The London Review of Books has a blog entry by Eyal Weizman, The algorithm is watching you (Feb. 19, 2020). Weizman, the founding director of Forensic Architecture, writes that he was denied entry into the USA because an algorithm had identified a security issue. He was going to the US for a show of his work in Miami titled True to Scale.

Setting aside the issue that the US government now seems to be denying entry to people who do inconvenient investigations, something a country that prides itself on human rights shouldn't do, the use of an algorithm as the reason is disturbing on several counts:

  • As Weizman tells the story, the embassy officer couldn't say what triggered the algorithm. That would seem to violate important principles for the use of AIs, namely that an AI used in making decisions should be transparent and able to explain why it made the decision. Perhaps the agency involved doesn't want to reveal the nasty logic behind its algorithms.
  • Further, there is no recourse, which violates another principle for AIs, namely that they should be accountable and that there should be mechanisms for challenging a decision.
  • The officer then asked Weizman to provide more information, such as his travel history for the last 15 years and his contacts, which he understandably declined to do. In effect the system was asking him to surveil himself and share the results with a foreign government. Are we going to be put in the situation where we have to surrender privacy in order to get access to government services? We already do that for commercial services.
  • As Weizman points out, this shows the “arbitrary logic of the border” that is imposed on migrants. Borders have become grey zones where the laws inside a country don’t apply and the worst of a nation’s phobias are manifest.

Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story, Facebook to Pay $550 Million to Settle Facial Recognition Suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook's face recognition technology, part of Tag Suggestions, which would suggest names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent: the Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa) BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this:

  • Face recognition is dependent on biometric data which is extracted from an image or in other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.
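The position above amounts to a conjunction of conditions, and can be sketched as a simple predicate. This is purely an illustration of the logic; the field and function names are my own invention, not anything drawn from BIPA's text:

```python
from dataclasses import dataclass

@dataclass
class BiometricRecord:
    """One person's biometric data as held by a service (illustrative only)."""
    consent_given: bool        # did the person consent to collection and storage?
    securely_stored: bool      # e.g. encrypted at rest and access-controlled
    retention_days: int        # how long the data has been held so far
    retention_limit_days: int  # the service's stated deletion deadline

def collection_is_defensible(rec: BiometricRecord) -> bool:
    """True only if all of the BIPA-style conditions above hold together."""
    return (
        rec.consent_given
        and rec.securely_stored
        and rec.retention_days <= rec.retention_limit_days
    )

# A record collected without consent fails, however securely it is stored.
no_consent = BiometricRecord(False, True, 10, 365)
print(collection_is_defensible(no_consent))  # False
```

The point of the conjunction is that the conditions are jointly necessary: failing any one of them (no consent, insecure storage, or data kept past its deadline) is enough, on this reading, to make the collection unethical.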

There are a number of interesting points to be made about this position. First, it is not the gathering, storing, and providing access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored, and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?

Second, my reading is that a service like Clearview AI could also be sued if they scrape images of people in Illinois and extract biometric data. This could provide an answer to the question of what is ethically wrong about the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which it is being gathered when seeking consent, and limit use of the data to the identified purposes. When the company no longer needs the data for the identified use, it should destroy it. This is essentially part of the PIPA principle of Limiting Use, Disclosure and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn't always the case. Sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.

Avast closes Jumpshot over data privacy backlash, but transparency is the real issue

Avast will shutter its Jumpshot subsidiary just days after an exposé targeted the way it sold user data. But transparency remains the bigger issue.

From VentureBeat (via Slashdot) comes the news that the antivirus company Avast closes Jumpshot over data privacy backlash, but transparency is the real issue (Paul Sawers, Jan. 30, 2020). Avast had been found to be gathering detailed data about users of its antivirus tools and then selling anonymized data through its Jumpshot subsidiary. The data was of sufficient detail (apparently down to an “all clicks feed”) that it would probably be possible to deanonymize it. So what was the ethical problem here?

As the title of the story advertises, the issue was not that Avast was concealing what it was doing; it is more a matter of how transparent it was about it. The data collection was “opt out,” so you had to go find the setting rather than being asked if you wanted to “opt in.” Jumpshot was apparently fairly open about its business. The information Avast provided to help you make a decision was not particularly deceptive (see image below), but it is typical of software in downplaying the identifiability of the data collected.

Some of the issue is around consent. What realistically constitutes consent these days? Does one need to opt-in for there to be meaningful consent? Does one need sufficient information to make a decision, and if so, what would that be?

The Secretive Company That Might End Privacy as We Know It

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

The New York Times has an important story about Clearview AI, The Secretive Company That Might End Privacy as We Know It. Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application to which you can upload a picture; it will try to recognize the person and show you their social media trail. Clearview is selling the service to police forces.

Needless to say, this is a disturbing use of face recognition for surveillance built on our own social media. Clearview is using public images that any one of us could look at, but at a scale no person could handle. It is doing something that would be almost impossible to stop, even with legislation. What's to stop the intelligence service of another country from doing this (and more)? Perhaps privacy is no longer possible.

Continue reading The Secretive Company That Might End Privacy as We Know It

There are 2,373 squirrels in Central Park. I know because I helped count them

I volunteered for the first squirrel census in the city. Here’s what I learned, in a nutshell.

From Lauren Klein on Twitter I learned about a great New York Times article, There are 2,373 squirrels in Central Park. I know because I helped count them, by Denise Lau (Jan. 8, 2020). As Klein points out, it is about the messiness of data collection. (Note that she has a book on Data Feminism coming out with Catherine D'Ignazio.)

In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review: In 2020, let's stop AI ethics-washing and actually do something. The call echoes something I've been talking about: that we need to move beyond guidelines, lists of principles, and checklists. She nicely describes some of the initiatives to hold AI accountable that are taking place and what should happen next. Read on if you want to see what I think we need.

Continue reading In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

The 100 Worst Ed-Tech Debacles of the Decade

With the end of the year there are some great articles showing up reflecting on the debacles of the decade. One of my favorites is Audrey Watters' The 100 Worst Ed-Tech Debacles of the Decade. Ed-Tech is one of those fields where, over and over, techies think they know better. Some of the debacles Watters discusses:

  • 3D Printing
  • The “Flipped Classroom” (Full disclosure: I sat on a committee that funded these.)
  • Op-Eds to ban laptops
  • Clickers
  • Stories about the end of the library
  • Interactive whiteboards
  • The K-12 Cyber Incident Map (Check it out here)
  • IBM Watson
  • The Year of the MOOC

This collection of 100 terrible ideas in instructional technology should be mandatory reading for all of us who have been keen on ed-tech. (And I am one who has developed ed-tech and oversold it.) Each item is a mini essay with links worth following.