Avast closes Jumpshot over data privacy backlash, but transparency is the real issue

Avast will shutter its Jumpshot subsidiary just days after an exposé targeted the way it sold user data. But transparency remains the bigger issue.

From VentureBeat (via Slashdot) comes the news that antivirus company Avast closes Jumpshot over data privacy backlash, but transparency is the real issue (Paul Sawers, Jan. 30, 2020). Avast had been found to be gathering detailed data about users of its antivirus tools and then selling anonymized data through Jumpshot. The data was of sufficient detail (apparently down to an “all clicks feed”) that it would probably be possible to deanonymize it. So what was the ethical problem here?

As the title of the story advertises, the issue was not that Avast was concealing what it was doing; it is more a matter of how transparent it was about it. The data collection was “opt out,” so you had to go find the setting rather than being asked whether you wanted to “opt in.” Jumpshot was apparently fairly open about its business. The information they provided to help you make a decision was not particularly deceptive (see image below), but it is typical of software to downplay the identifiability of the data collected.

Some of the issue is around consent. What realistically constitutes consent these days? Does one need to opt-in for there to be meaningful consent? Does one need sufficient information to make a decision, and if so, what would that be?
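To see why a detailed “all clicks feed” resists anonymization, consider a minimal sketch. The data, device IDs, and URLs below are entirely hypothetical; the point is only that a user’s set of visited URLs can act as a behavioural fingerprint that can be joined against some other dataset where an identity is known:

```python
# Sketch: why an anonymized "all clicks feed" can be re-identifiable.
# Hypothetical data: device IDs have replaced names, but each user's
# set of visited URLs still acts as a behavioural fingerprint.

clicks = {  # device_id -> URLs visited (invented for illustration)
    "dev-01": {"news.example", "bank.example/login", "myblog.example/about-me"},
    "dev-02": {"news.example", "video.example"},
    "dev-03": {"news.example", "video.example"},
}

# A side channel linking a known identity to a distinctive URL,
# e.g. a personal blog whose owner is public knowledge.
known_visits = {"alice": {"myblog.example/about-me"}}

def reidentify(clicks, known_visits):
    """Match a named person to a device ID via distinctive URLs."""
    matches = {}
    for person, urls in known_visits.items():
        candidates = [d for d, visited in clicks.items() if urls <= visited]
        if len(candidates) == 1:  # fingerprint is unique -> re-identified
            matches[person] = candidates[0]
    return matches

print(reidentify(clicks, known_visits))  # {'alice': 'dev-01'}
```

The more detailed the feed, the more such distinctive URLs it contains, and the easier this kind of join becomes.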

In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver, I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review, In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been talking about – that we need to move beyond guidelines, lists of principles, and checklists. She nicely talks about some of the initiatives to hold AI accountable that are taking place and what should happen. Read on if you want to see what I think we need.


From Facebook: An Update on Building a Global Oversight Board

We’re sharing the progress we’ve made in building a new organization with independent oversight over how Facebook makes decisions on content.

Brent Harris, Director of Governance and Global Affairs at Facebook, has an interesting blog post that provides An Update on Building a Global Oversight Board (Dec. 12, 2019). Facebook is developing an independent Global Oversight Board that will be able to make decisions about content on Facebook.

I can’t help feeling that Facebook is still trying to avoid being a content company. Instead of admitting that parts of what they do matches what media content companies do, they want to stick to a naive, but convenient, view that Facebook is a technological facilitator and content comes from somewhere else. This, like the view that bias in AI is always in the data and not in the algorithms, allows the company to continue with the pretence that they are about clean technology and algorithms. All the old human forms of judgement will be handled by an independent GOB so Facebook doesn’t have to admit they might have a position on something.

What Facebook should do is admit that they are a media company and that they make decisions that influence what users see (or not). They should do what newspapers do – embrace the editorial function as part of what it means to deal in content. There is still, in newspapers, an affectation of separation between the opinionated editorial and objective reporting functions, but it is one that is open for discussion. What Facebook is doing is not taking responsibility but sequestering it. This will allow Facebook to play innocent as wave after wave of fake news stories sweeps through their system.

Still, it is an interesting response by a company that obviously wants to deal in news for its economic value, but doesn’t want to be corrupted by it.

Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It

Scooters have come to Edmonton. Both Bird and Lime dumped hundreds of scooters in my neighbourhood just before the Fringe festival. Users are supposed to ride in bike lanes and on shared-use paths, but of course they tend to use the sidewalks. Fortunately, most people using them seem to be trying them for a lark rather than seriously trying to get somewhere.

I can’t help thinking this business is a bit like the Segway (a company apparently making money now selling the scooters) – a great concept that appeals to venture capital, but not something that will work economically. For example, what will happen in the winter? Will the companies leave them around in the snow or pack them up for the season?

The economic model of these companies is also interesting. They seem to have minimal staff in each city. They pay chargers to find the scooters and charge them each night. More gig-economy work that may not provide a living! See Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It.

At the end of the day, does anyone make enough for this to be viable? One wonders whether the scooter companies are selling the data they gather.

Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg?

‘Imagine this for a second…’ (2019) from Bill Posters on Vimeo.

A ‘deepfake’ of Zuckerberg was uploaded to Instagram and appears to show him delivering an ominous message

The issue of “deepfakes” is big on the internet after someone posted a slowed-down video of Nancy Pelosi to make her look drunk and then, after Facebook didn’t take it down, a group posted a fake Zuckerberg video. See Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg? The video was created by artists Posters and Howe and is part of a series.

While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in localization of educational content, for example.) The artists provided a voice actor with a script and then the AI trained on existing video of Zuckerberg and that of the voice actor to morph Zuckerberg’s facial movements to match the actor’s.

What is interesting is that the Zuckerberg video is part of an installation called Spectre, with a number of deliberate fakes, that was exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name suggests, evokes how our data can be used to create ghost media of us, and also playfully reminds us of the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.

Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming

“every move you make…, every word you say, every game you play…, I’ll be watching you.” (The Police – Every Breath You Take)

Education Week has an alarming story about how schools are using surveillance, Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming. The story is by Benjamin Herold and dates from May 30, 2019. It talks not only about the deployment of cameras, but about the use of companies like Social Sentinel, Securly, and Gaggle that monitor social media or school computers.

Every day, Gaggle monitors the digital content created by nearly 5 million U.S. K-12 students. That includes all their files, messages, and class assignments created and stored using school-issued devices and accounts.

The company’s machine-learning algorithms automatically scan all that information, looking for keywords and other clues that might indicate something bad is about to happen. Human employees at Gaggle review the most serious alerts before deciding whether to notify school district officials responsible for some combination of safety, technology, and student services. Typically, those administrators then decide on a case-by-case basis whether to inform principals or other building-level staff members.
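The article does not detail Gaggle’s algorithms, but the basic pattern it describes – automated scanning with severity tiers, and humans reviewing only the most serious alerts – can be sketched as follows. The keywords and tiers here are invented for illustration; a real system would presumably use machine-learned classifiers rather than a word list:

```python
# Minimal sketch of keyword-based content flagging with severity tiers.
# Keyword lists and tier names are invented; this is not Gaggle's system.
import re

RULES = {  # severity -> trigger phrases (hypothetical)
    "urgent": ["hurt myself"],
    "review": ["fight", "cheat"],
}

def scan(document):
    """Return the highest-severity rule a document triggers, if any."""
    text = document.lower()
    for severity in ("urgent", "review"):  # check most serious tier first
        for phrase in RULES[severity]:
            if re.search(r"\b" + re.escape(phrase) + r"\b", text):
                return severity
    return None

# In this sketch, only "urgent" hits would be escalated to a human reviewer.
docs = ["I might hurt myself", "big fight at lunch", "math homework"]
alerts = [doc for doc in docs if scan(doc) == "urgent"]
print(alerts)  # ['I might hurt myself']
```

Even this toy version shows why false positives are inevitable: a simple phrase match has no sense of context, which is why the human-review step carries so much weight.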

The story provides details that run from the serious to the absurd. It mentions concerns by the ACLU that such surveillance can desensitize children to surveillance and make it normal. The ACLU story makes a connection with laws that forbid federal agencies from studying or sharing data that could make the case for gun control. This creates a situation where the obvious ways to stop gun violence in schools aren’t studied so surveillance companies step in with solutions.

Needless to say, surveillance has its own potential harms beyond desensitization. The ACLU story lists the following potential harms:

  • Suppression of students’ intellectual freedom, because students will not want to investigate unpopular or verboten subjects if the focus of their research might be revealed.
  • Suppression of students’ freedom of speech, because students will not feel at ease engaging in private conversations they do not want revealed to the world at large.
  • Suppression of students’ freedom of association, because surveillance can reveal a student’s social contacts and the groups a student engages with, including groups a student might wish to keep private, like LGBTQ organizations or those promoting locally unpopular political views or candidates.
  • Undermining students’ expectation of privacy, which occurs when they know their movements, communications, and associations are being watched and scrutinized.
  • False identification of students as safety threats, which exposes them to a range of physical, emotional, and psychological harms.

As with the massive investment in surveillance for national security and counter-terrorism purposes, we need to ask whether the cost of these systems, both financial and otherwise, is worth it. Unfortunately, protecting children, like protecting against terrorism, is hard to put a price on, which makes it hard to argue against such investments.

Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements

Ring, Amazon’s doorbell company, posted a video of a woman suspected of a crime and asked users to call the cops with information.

VICE has a story about how Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements. The story is that Ring took out an ad showing suspicious behaviour. A woman who is presumably innocent until proven guilty is shown clearly in order to sell more alarm systems. The information came from the police.

Needless to say, this raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour, creating a form of digital neighbourhood watch. The article references a Motherboard article suggesting that such digital neighbourhood surveillance can lead to racism.

Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such “crime and safety” focused platforms can actively reinforce racism.

All we need now is AI in the mix: face recognition so you can identify anyone walking past your door.

Undersea Cables – Huawei’s ace in the hole

About a decade ago, Huawei entered the business by setting up a joint venture with British company Global Marine Systems. It expanded its presence by laying short links in regions like Southeast Asia and the Russian Far East. But last September, Huawei surprised industry executives in Japan, the U.S. and Europe by completing a 6,000 km trans-Atlantic cable linking Brazil with Cameroon.

This showed Huawei has acquired advanced capabilities, even though it is still far behind the established players in terms of experience and cable volume.

During the 2015-2020 period, Huawei is expected to complete 20 new cables — mostly short ones of less than 1,000 km. Even when these are finished, Huawei’s market share will be less than 10%. Over the long term, however, the company could emerge as a player to be reckoned with.

The Nikkei Asian Review has an interesting article on Undersea cables — Huawei’s ace in the hole. My impression from the Snowden leaks and other readings is that the US and UK have taps at many of the cable landing stations, which allows them to listen in on a large proportion of international internet traffic. If China starts building its own global network of cables, that could provide an alternative network backbone.

We Built a (Legal) Facial Recognition Machine for $60

The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.

The New York Times has an opinion piece by Sahil Chinoy on how (they) We Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment in which they took footage of people walking past some cameras installed in Bryant Park and compared the faces to those of known people who work in the area (scraped from the web sites of organizations with offices in the neighborhood). Everything they did used public resources that others could use: the cameras stream their footage publicly, anyone can scrape the images, the image database they gathered came from public web sites, and the software is a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
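The Times does not publish its code, but the core matching step of any such pipeline – compare a face embedding extracted from a camera frame against a gallery of known embeddings – looks roughly like this. The tiny 3-dimensional “embeddings” below are toy values standing in for the high-dimensional vectors a real face-recognition model or service would produce:

```python
# Sketch of the matching step in a facial-recognition pipeline.
# Toy 3-d "embeddings" stand in for the ~128-d vectors a real model produces.
import math

gallery = {  # name -> embedding from a scraped staff photo (toy data)
    "alice": (0.9, 0.1, 0.2),
    "bob": (0.1, 0.8, 0.3),
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.5):
    """Return the closest gallery identity within the threshold, else None."""
    best_name, best_emb = min(gallery.items(),
                              key=lambda kv: euclidean(probe, kv[1]))
    return best_name if euclidean(probe, best_emb) <= threshold else None

frame_face = (0.85, 0.15, 0.25)   # embedding extracted from a camera frame
print(identify(frame_face, gallery))  # alice
```

The threshold is the whole game: set it loose and you get false identifications of the kind the article worries about; set it tight and the system simply fails more quietly.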

I’m intrigued by this experiment by the New York Times. It is a form of design thinking where they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could call it a form of journalistic experimentation.

Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for recognition in all sorts of situations? Do we need to start guarding our faces?

Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.

This is one of a number of excellent articles by the New York Times that are part of their Privacy Project.

Research Team Security

One of the researchers in the GamerGate Reactions team has created a fabulous set of recommendations for team members doing dangerous research. See Security_Recommendations_2018_v2.0. This document brings together in one place a lot of information and links on how to secure your identity and research. The researcher put this together in support of a panel that I am chairing this afternoon on Risky Research that is part of a day of panels/workshops following the Edward Snowden talk yesterday evening. (You can see my blog entry on Snowden’s talk here.) The key topics covered include:

  • Basic Security Measures
  • Use End-to-End Encryption for Communications
  • Encrypt Your Computer
  • Destroy All Information
  • Secure Browsing
  • Encrypt all Web Traffic
  • Avoiding Attacks
  • On Preventing Doxing
  • Dealing with Harassment