Is this crisis a turning point?

The era of peak globalisation is over. For those of us not on the front line, clearing the mind and thinking how to live in an altered world is the task at hand.

John Gray has written an essay in the New Statesman on Why this crisis is a turning point in history. He argues that the era of hyperglobalism is at an end and that many systems may not survive the shift to something different. Many may think that, after a period of isolated pain, we will return to the old expansion of wealth, but the economic crisis now emerging may break that dream. Governments and nations may be broken by collapsing systems.

The tech ‘solutions’ for coronavirus take the surveillance state to the next level

Neoliberalism shrinks public budgets; solutionism shrinks public imagination.

Evgeny Morozov has a crisp essay in The Guardian on how The tech ‘solutions’ for coronavirus take the surveillance state to the next level. He argues that neoliberal austerity cut back our public services in ways that we now see are endangering lives, but that it is solutionism that is constraining our ideas about what we can do to deal with the situation. If we look for a technical solution, we give up on questioning the underlying defunding of the commons.

There is a nice interview between Natasha Dow Schüll and Morozov, The Folly of Technological Solutionism: An Interview with Evgeny Morozov, in which they talk about his book To Save Everything, Click Here: The Folly of Technological Solutionism and about gamification.

Back in The Guardian, he ends his essay warning against merely picking between apps – between solutions. We should get beyond solutions like apps to thinking politically.

The feast of solutionism unleashed by Covid-19 reveals the extreme dependence of the actually existing democracies on the undemocratic exercise of private power by technology platforms. Our first order of business should be to chart a post-solutionist path – one that gives the public sovereignty over digital platforms.

Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story, Facebook to Pay $550 Million to Settle Facial Recognition Suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook’s face recognition technology, which was part of Tag Suggestions, a feature that would suggest names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent. The Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa), BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this:

  • Face recognition is dependent on biometric data which is extracted from an image or some other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.

There are a number of interesting points to be made about this position. First, it is not the gathering, storing and providing access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?
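To make the distinction concrete, here is a minimal sketch of what “extracting biometric data” looks like in code, using the open-source face_recognition library (the file name is just a placeholder). Storing the photo is one thing; computing and keeping the face encoding is the biometric step that a BIPA-style position would say requires consent, secure storage, and timely destruction.

```python
import face_recognition

# Load a photo. At this point we only have pixels, roughly what an
# image search engine indexes when it stores whole images.
image = face_recognition.load_image_file("photo_of_person.jpg")  # placeholder file name

# Computing a face encoding turns those pixels into a 128-dimensional
# vector characterizing this particular person's face. This is the kind
# of biometric template a BIPA-style analysis is concerned with.
encodings = face_recognition.face_encodings(image)

if encodings:
    biometric_template = encodings[0]
    # Storing or comparing this vector is what makes recognition across
    # photos possible, and so is the step that would require consent,
    # secure storage, and timely deletion under the position sketched above.
    print(len(biometric_template))  # 128 numbers describing one face
```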

Second, my reading is that a service like Clearview AI could also be sued if it scrapes images of people in Illinois and extracts biometric data. This could provide an answer to the question of what is ethically wrong about the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which they are gathering it when seeking consent and limit their use of the data to the identified uses. When they no longer need the data for the identified uses, they should destroy it. This is essentially part of the PIPA principle of Limiting Use, Disclosure and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn’t always the case; sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.

Avast closes Jumpshot over data privacy backlash, but transparency is the real issue

Avast will shutter its Jumpshot subsidiary just days after an exposé targeted the way it sold user data. But transparency remains the bigger issue.

From VentureBeat (via Slashdot) comes the news that antivirus company Avast closes Jumpshot over data privacy backlash, but transparency is the real issue (Paul Sawers, Jan. 30, 2020). Avast had been found to be gathering detailed data about users of its antivirus tools and then selling anonymized data through its Jumpshot subsidiary. The data was of sufficient detail (apparently down to an “all clicks feed”) that it would probably be possible to deanonymize it. So what was the ethical problem here?

As the title of the story suggests, the issue was not that Avast was concealing what it was doing; it is more a matter of how transparent it was about it. The data collection was “opt out,” so you had to go find the setting rather than being asked if you wanted to “opt in.” Jumpshot was apparently fairly open about its business. The information they provided to help you make a decision was not particularly deceptive (see image below), but it is typical of software to downplay the identifiability of the data collected.

Some of the issue is around consent. What realistically constitutes consent these days? Does one need to opt in for there to be meaningful consent? Does one need sufficient information to make a decision, and if so, what would that be?
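To see why an “all clicks feed” is so hard to truly anonymize, here is a toy sketch (the data, column names, and URLs are entirely hypothetical) of how a few clicks already linked to a person – say from a site’s own logs or a timestamped purchase – can single out one “anonymous” device ID, after which all of that device’s other clicks are attributed to the person.

```python
import pandas as pd

# Hypothetical "anonymized" click feed: device IDs replaced with random
# tokens, but full URLs and precise timestamps retained.
clicks = pd.DataFrame([
    {"device": "a1f3", "url": "shop.example/checkout/order/8123", "ts": "2020-01-12 14:03:22"},
    {"device": "a1f3", "url": "smallblog.example/my-post",        "ts": "2020-01-12 14:07:45"},
    {"device": "9b77", "url": "news.example/story",               "ts": "2020-01-12 14:08:01"},
])

# A couple of events already known to belong to one person, e.g. from
# their own site's logs or an order confirmation.
known = pd.DataFrame([
    {"url": "smallblog.example/my-post", "ts": "2020-01-12 14:07:45"},
])

# Joining on URL and timestamp picks out the "anonymous" device token...
device = clicks.merge(known, on=["url", "ts"])["device"].iloc[0]

# ...and every other click by that device is now linked to the person.
print(clicks[clicks["device"] == device])
```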

In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review, In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been talking about – that we need to move beyond guidelines, lists of principles, and checklists. She nicely talks about some of the initiatives to hold AI accountable that are taking place and about what should happen. Read on if you want to see what I think we need.


From Facebook: An Update on Building a Global Oversight Board

We’re sharing the progress we’ve made in building a new organization with independent oversight over how Facebook makes decisions on content.

Brent Harris, Director of Governance and Global Affairs at Facebook, has an interesting blog post that provides An Update on Building a Global Oversight Board (Dec. 12, 2019). Facebook is developing an independent Global Oversight Board which will be able to make decisions about content on Facebook.

I can’t help feeling that Facebook is still trying to avoid being a content company. Instead of admitting that parts of what they do matches what media content companies do, they want to stick to a naive, but convenient, view that Facebook is a technological facilitator and content comes from somewhere else. This, like the view that bias in AI is always in the data and not in the algorithms, allows the company to continue with the pretence that they are about clean technology and algorithms. All the old human forms of judgement will be handled by an independent GOB so Facebook doesn’t have to admit they might have a position on something.

What Facebook should do is admit that they are a media company and that they make decisions that influence what users see (or not). They should do what newspapers do – embrace the editorial function as part of what it means to deal in content. There is still, in newspapers, an affectation of separation between opinionated editorial and objective reporting functions, but it is one that is open for discussion. What Facebook is doing is not a taking of responsibility but a sequestering of it. This will allow Facebook to play innocent as wave after wave of fake news stories sweeps through their system.

Still, it is an interesting response by a company that obviously wants to deal in news for the economic value, but doesn’t want to be corrupted by it.

Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It

Scooters have come to Edmonton. Both Bird and Lime dumped hundreds of scooters in my neighbourhood just before the Fringe festival. Users are supposed to use bike lanes and shared-use paths, but of course they tend to use sidewalks. Fortunately most people using them seem to be trying them for a lark rather than seriously trying to get somewhere.

I can’t help thinking this business is a bit like the Segway (a company apparently now making money selling scooters) – a great concept that appeals to venture capital, but not something that will work economically. For example, what will happen in the winter? Will the companies leave the scooters around in the snow or pack them up for the season?

The economic model of these companies is also interesting. They seem to have minimal staff in each city. They pay chargers to find the scooters and charge them each night. More gig-economy work that may not provide a living! See Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It.

At the end of the day, does anyone make enough to make this viable? One wonders whether the scooter companies are selling the data they gather.

Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg?

‘Imagine this for a second…’ (2019) from Bill Posters on Vimeo.

A ‘deepfake’ of Zuckerberg was uploaded to Instagram and appears to show him delivering an ominous message

The issue of “deepfakes” is big on the internet after someone posted a slowed-down video of Nancy Pelosi to make her look drunk and then, after Facebook didn’t take it down, a group posted a fake Zuckerberg video. See Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg? The Zuckerberg video was created by the artists Posters and Howe and is part of a series.

While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in the localization of educational content, for example). The artists provided a voice actor with a script, and the AI was trained on existing video of Zuckerberg and of the voice actor in order to morph Zuckerberg’s facial movements to match the actor’s.

What is interesting is that the Zuckerberg video is part of an installation called Spectre, with a number of deliberate fakes, that was exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name suggests, evokes both how our data can be used to create ghost media of us and, playfully, the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.

Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming

“every move you make…, every word you say, every game you play…, I’ll be watching you.” (The Police – Every Breath You Take)

Education Week has an alarming story about how schools are using surveillance, Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming. The story is by Benjamin Herold and dates from May 30, 2019. It talks not only about the deployment of cameras, but also about the use of companies like Social Sentinel, Securly, and Gaggle that monitor social media or school computers.

Every day, Gaggle monitors the digital content created by nearly 5 million U.S. K-12 students. That includes all their files, messages, and class assignments created and stored using school-issued devices and accounts.

The company’s machine-learning algorithms automatically scan all that information, looking for keywords and other clues that might indicate something bad is about to happen. Human employees at Gaggle review the most serious alerts before deciding whether to notify school district officials responsible for some combination of safety, technology, and student services. Typically, those administrators then decide on a case-by-case basis whether to inform principals or other building-level staff members.
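The quoted description suggests a pipeline that is easy to picture: automated scanning of everything students write, with the worst hits escalated to human reviewers. Here is a deliberately toy sketch of that shape (the flag terms, severity tiers, and documents are all invented; Gaggle’s actual system is proprietary and far more sophisticated than a bare keyword list).

```python
# Toy illustration of "scan everything, escalate the worst hits."
# The terms, tiers, and documents below are invented for illustration only.
FLAG_TERMS = {
    "hurt myself": "high",
    "fight after school": "medium",
    "hate this class": "low",
}
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2}

def scan_document(text):
    """Return the highest-severity flag triggered by a document, or None."""
    hits = [sev for term, sev in FLAG_TERMS.items() if term in text.lower()]
    return max(hits, key=SEVERITY_ORDER.get) if hits else None

documents = [
    "I hate this class, the homework never ends.",
    "Reminder: science fair projects are due Friday.",
]

for doc in documents:
    severity = scan_document(doc)
    if severity == "high":
        # Per the description above, a human reviewer looks at these
        # before district officials are notified.
        print("Escalate to human reviewer:", doc)
    elif severity:
        print(f"Logged ({severity}):", doc)
```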

The story provides details that run from the serious to the absurd. It mentions concerns by the ACLU that such surveillance can desensitize children to surveillance and make it seem normal. The ACLU story makes a connection with laws that forbid federal agencies from studying or sharing data that could make the case for gun control. This creates a situation where the obvious ways to stop gun violence in schools aren’t studied, so surveillance companies step in with solutions.

Needless to say, surveillance has its own potential harms beyond desensitization. The ACLU story lists the following potential harms:

  • Suppression of students’ intellectual freedom, because students will not want to investigate unpopular or verboten subjects if the focus of their research might be revealed.
  • Suppression of students’ freedom of speech, because students will not feel at ease engaging in private conversations they do not want revealed to the world at large.
  • Suppression of students’ freedom of association, because surveillance can reveal a student’s social contacts and the groups a student engages with, including groups a student might wish to keep private, like LGBTQ organizations or those promoting locally unpopular political views or candidates.
  • Undermining students’ expectation of privacy, which occurs when they know their movements, communications, and associations are being watched and scrutinized.
  • False identification of students as safety threats, which exposes them to a range of physical, emotional, and psychological harms.

As with the massive investment in surveillance for national security and counter-terrorism purposes, we need to ask whether the cost of these systems, both financial and otherwise, is worth it. Unfortunately, protecting children, like protecting against terrorism, is hard to put a price on, which makes it hard to argue against such investments.

Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements

Ring, Amazon’s doorbell company, posted a video of a woman suspected of a crime and asked users to call the cops with information.

VICE has a story about how Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements. The story is that Ring took out an ad which showed suspicious behaviour. A woman who is presumably innocent until proven guilty is shown clearly in order to sell more alarm systems. This information came from the police.

Needless to say, it raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour, creating a form of digital neighbourhood watch. The article references a Motherboard article that suggests that such digital neighbourhood surveillance can lead to racism.

Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such a “crime and safety” focused platforms can actively reinforce racism.

All we need now is for there to be AI in the mix. Face recognition so you can identify anyone walking past your door.