The tech ‘solutions’ for coronavirus take the surveillance state to the next level

Neoliberalism shrinks public budgets; solutionism shrinks public imagination.

Evgeny Morozov has a crisp essay in The Guardian on how The tech ‘solutions’ for coronavirus take the surveillance state to the next level. He argues that neoliberal austerity has cut back our public services in ways that we can now see are endangering lives, but that it is solutionism that constrains our ideas about what we can do to deal with such situations. If we look for a technical solution we give up on questioning the underlying defunding of the commons.

There is a nice interview between Natasha Dow Schüll and Morozov, The Folly of Technological Solutionism: An Interview with Evgeny Morozov, in which they talk about his book To Save Everything, Click Here: The Folly of Technological Solutionism and about gamification.

Back in The Guardian, he ends his essay warning that we shouldn’t just focus on picking between apps – between solutions. We need to get beyond solutions like apps to thinking politically.

The feast of solutionism unleashed by Covid-19 reveals the extreme dependence of the actually existing democracies on the undemocratic exercise of private power by technology platforms. Our first order of business should be to chart a post-solutionist path – one that gives the public sovereignty over digital platforms.

Covid-19 Notice on YouTube


When you go to YouTube now in Canada, a notice from the Public Health Agency of Canada pops up inviting you to Learn More from a reliable source. This strikes me as a great way to encourage people to get their information from a reliable source rather than wallow in fake news online. This is particularly important for YouTube, which is one of the facilitators of fake news.

More generally, it shows an alternative way that social media platforms can fight fake news on key issues: they can work with governments to put appropriate information before people.

Further, the Learn More link goes to a government site with a wealth of information and links. Had it just been a short feel-good message with a bit of advice, the site probably wouldn’t work to draw people towards reliable information. Instead the site has enough depth that one could spend a lot of time there and get a satisfying picture. This is what one needs to fight fake news in a time of obsession – plenty of true news for the obsessed.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too – VICE

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California Consumer Privacy Act to ask Clearview AI what information they kept on her and then to have it deleted. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers like Insta Stalkers and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos – is it then the use of the images that is the problem? Is it the recognition and search capability?
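
To make the question concrete, here is a minimal sketch in Python of what extracting and searching biometric data involves, using the open-source face_recognition library (chosen purely for illustration – we don’t know what Clearview actually uses, and the filenames are made up):

```python
# Sketch only: face_recognition is an open-source library, not Clearview's code.
import face_recognition

# A photo is just pixels; the biometric datum is the 128-dimensional
# face encoding computed from those pixels.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Recognition is then a nearest-neighbour search: compare the encodings
# found in a new photo against a database of stored encodings.
query_image = face_recognition.load_image_file("scraped_photo.jpg")
for encoding in face_recognition.face_encodings(query_image):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if distance < 0.6:  # the library's conventional match threshold
        print(f"Probable match (distance {distance:.2f})")
```

The sketch suggests where the line might lie: storing images leaves you with pixels, while storing encodings gives you an index of faces that can be queried at scale.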

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2018). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.


Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story on how Facebook is to pay $550 million to settle a facial recognition suit: Facebook to Pay $550 Million to Settle Facial Recognition Suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook’s face recognition technology, part of the Tag Suggestions feature that would suggest names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent. The Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa), BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this:

  • Face recognition is dependent on biometric data which is extracted from an image or other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.
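
One could encode this position as a simple conjunction of checks. Here is a toy sketch in Python; the field names and the one-year retention period are my illustrative assumptions, not anything in BIPA:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BiometricRecord:
    """Toy model of the BIPA-style position sketched above."""
    subject_consented: bool   # consent obtained when the data was collected
    stored_securely: bool     # e.g. encrypted at rest
    collected_on: datetime
    retention_limit: timedelta = timedelta(days=365)  # assumed, not from BIPA

    def is_defensible(self, now: datetime) -> bool:
        # The position is a conjunction: fail any one condition and the
        # system/service can no longer be said to not be unethical.
        timely = now - self.collected_on <= self.retention_limit
        return self.subject_consented and self.stored_securely and timely
```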

There are a number of interesting points to be made about this position. First, it is not the gathering, storing and providing access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?

Second, my reading is that a service like Clearview AI could also be sued if they scrape images of people in Illinois and extract biometric data. This could provide an answer to the question of what is ethically wrong about the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which they are gathering it when seeking consent and limit their use of the data to the identified uses. When they no longer need the data for the identified use, they should destroy it. This is essentially part of the PIPA principle of Limiting Use, Disclosure and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn’t always the case. Sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.

The Secretive Company That Might End Privacy as We Know It

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

The New York Times has an important story about Clearview AI, The Secretive Company That Might End Privacy as We Know It. Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application to which you can upload a picture; it tries to recognize the person and show you their social media trail. They are then selling the service to police forces.

Needless to say, this is a disturbing use of face recognition for surveillance built on our own social media. They are using public images that any one of us could look at, but at a scale no person could handle. They are doing something that would be almost impossible to stop, even with legislation. What’s to stop the intelligence services of another country doing this (and more)? Perhaps privacy is no longer possible.


ParityBOT: Twitter bot

ParityBOT is a chatbot developed here in Edmonton that tweets positive things about women in politics in response to hateful tweets. It sends empowering messages.

You can read about it in a CBC story, Engineered-in-Edmonton Twitter bot combats misogyny on the campaign trail.

The bot follows all women candidates in the election and uses some sort of AI or sentiment detection to identify nasty tweets aimed at them, and then responds with a positive message from a collection crowdsourced from the public. What isn’t clear is whether the positive message is sent to the offending tweeter or just posted generally.
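
The CBC story doesn’t describe the implementation, but the obvious shape is a sentiment-threshold pipeline. Here is a rough sketch in Python, assuming a stock sentiment analyzer (VADER) and made-up example messages; ParityYEG’s actual classifier, threshold, and message pool are not public as far as I know:

```python
import random
from typing import Optional

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Stand-ins for the empowering messages crowdsourced from the public.
positive_messages = [
    "Women belong in politics. Thank you for running!",
    "Your voice makes our democracy stronger.",
]

def respond_if_hateful(tweet_text: str) -> Optional[str]:
    """Return an empowering message if a tweet at a candidate is nasty."""
    score = analyzer.polarity_scores(tweet_text)["compound"]  # -1 to 1
    if score < -0.6:  # the threshold is my guess, not ParityBOT's
        return random.choice(positive_messages)
    return None
```

Whether the returned message then goes as a reply to the offender or as a general post is exactly the design choice the article leaves open.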

ParityBOT was developed by ParityYEG, which is a collaboration between the Alberta Machine Intelligence Institute and scientist Kory Mathewson.

Incels, Pickup Artists, and the World of Men’s Seduction Training

On Monday, April 23rd, a 25-year old man named Alek Minassian drove a rented van down a sidewalk in Toronto, killing eight women and two men. The attack was reminiscent of recent Islamist terror attacks in New York, London, Stockholm, Nice, and Berlin. Just before his massacre, he posted a note on Facebook announcing: “Private (Recruit) Minassian Infantry 00010, wishing to speak to Sgt 4chan please. C23249161, the Incel Rebellion has already begun! We will overthrow all the Chads and Stacys! All hail the Supreme Gentleman Elliot Rodger!”

Anders Wallace has published an essay in 3 Quarks Daily on Incels, Pickup Artists, and the World of Men’s Seduction Training that starts with the recent attack in Toronto by the self-styled “incel” Minassian, who adapted a terror tactic, and then moves on to seduction training. Wallace has participated in seduction training and immersed himself in the “manosphere,” which he defines thus:

The manosphere is a digital ecosystem of blogs, podcasts, online forums, and hidden groups on sites like Facebook and Tumblr. Here you’ll find a motley crew of men’s rights activists, white supremacists, conspiracy theorists, angry divorcees, disgruntled dads, male victims of abuse, self-improvement junkies, bodybuilders, bored gamers, alt-righters, pickup artists, and alienated teenagers. What they share is a vicious response to feminists (often dubbed “feminazis”) and so-called “social justice warriors.” They blame their anger on identity politics, affirmative action, and the neoliberal state, which they perceive are compromising equality and oppressing their own free speech.

The essay doesn’t provide easy answers, though one can find temptations (like the idea that these incels are men who were undermothered); instead it nicely surveys the loose network of ideas, resentments and desires that animate the manosphere. What stands out is the lack of alternative models of heterosexual masculinity. Too many of the mainstream role models we are presented with (from sports to media to superheroes) reinforce the very characteristics incels want training in, from stoicism to aggression.

After the Facebook scandal it’s time to base the digital economy on public v private ownership of data

In a nutshell, instead of letting Facebook get away with charging us for its services or continuing to exploit our data for advertising, we must find a way to get companies like Facebook to pay for accessing our data – conceptualised, for the most part, as something we own in common, not as something we own as individuals.

Evgeny Morozov has a great essay in The Guardian on how After the Facebook scandal it’s time to base the digital economy on public v private ownership of data. He argues that better data protection is not enough. We need “to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data.” In Alberta that may start with Connect Care, a centralized clinical information system managed by the Province. The Province will presumably limit access to our data to those researchers and health-care practitioners who commit to using it appropriately. Can we imagine a model where Connect Care is expanded to include social data that we can then control and give others (businesses) access to?

More on Cambridge Analytica

More stories are coming out about Cambridge Analytica and the scraping of Facebook data, including some important new articles in The Guardian.

Perhaps the most interesting article is in The Conversation and argues that Claims about Cambridge Analytica’s role in Africa should be taken with a pinch of salt. The article carefully sets out evidence that CA didn’t have the effect they were hired to have in either the Nigerian election (where they failed to get Goodluck Jonathan re-elected) or the Kenyan election (where they may have helped Uhuru Kenyatta stay in power). The authors (Gabrielle Lynch, Justin Willis, and Nic Cheeseman) talk about how,

Ahead of the elections, and as part of a comparative research project on elections in Africa, we set up multiple profiles on Facebook to track social media and political adverts, and found no evidence that different messages were directed at different voters. Instead, a consistent negative line was pushed on all profiles, no matter what their background.

They also point out that the majority of Kenyans are not on Facebook and that negative advertising has a long history. They conclude that exaggerating what it can do is what CA does.

Mother Jones has another story, one of the best summaries around, Cloak and Data, that questions the effectiveness of Cambridge Analytica when it comes to the Trump election. They point out that CA’s earlier work in Virginia and for Cruz at the start of the primaries doesn’t seem to have worked. They go on to suggest that CA had little to do with the Trump victory, which Brad Parscale, the campaign’s head of digital operations, instead ascribed to investing heavily in Facebook advertising.

During an interview with 60 Minutes last fall, Parscale dismissed the company’s psychographic methods: “I just don’t think it works.” Trump’s secret strategy, he said, wasn’t secret at all: The campaign went all-in on Facebook, making full use of the platform’s advertising tools. “Donald Trump won,” Parscale said, “but I think Facebook was the method.”

The irony may be that Cambridge Analytica is brought down by its boasting, not by what it actually did. A further irony is that it may bring down Facebook and finally draw attention to how our data is used to manipulate us, even if the manipulation didn’t work.

The story of Cambridge Analytica’s rise—and its rapid fall—in some ways parallels the ascendance of the candidate it claims it helped elevate to the presidency. It reached the apex of American politics through a mix of bluffing, luck, failing upward, and—yes—psychological manipulation. Sound familiar?