Reflecting on one very, very strange year at Uber

As most of you know, I left Uber in December and joined Stripe in January. I’ve gotten a lot of questions over the past couple of months about why I left and what my time at Uber was like. It’s a strange, fascinating, and slightly horrifying story that deserves to be told while it is still fresh in my mind, so here we go.

The New York Times has a short review of Susan Fowler’s memoir: Her Blog Post About Uber Upended Big Tech. Now She’s Written a Memoir. Susan Fowler is the courageous engineer who documented the sexism at Uber in a blog post, Reflecting on one very, very strange year at Uber — Susan Fowler. Her blog post from 2017 (the opening of which is quoted above) was important in that it drew attention to the bro culture in Silicon Valley. It also led to investigations within Uber and eventually to the co-founder and CEO Travis Kalanick being ousted.

Continue reading Reflecting on one very, very strange year at Uber

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2018). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.

Continue reading It’s the (Democracy-Poisoning) Golden Age of Free Speech

Eyal Weizman: The algorithm is watching you

The London Review of Books has a blog entry by Eyal Weizman on how The algorithm is watching you (Feb. 19, 2020). Eyal Weizman, the founding director of Forensic Architecture, writes that he was denied entry into the USA because an algorithm had identified a security issue. He was going to the US for a show in Miami titled True to Scale.

Setting aside the issue of how the US government now seems to be denying entry to people who do inconvenient investigations (something a country that prides itself on human rights shouldn’t do), the use of an algorithm as a reason is disturbing for a number of reasons:

  • As Weizman tells the story, the embassy officer couldn’t tell what triggered the algorithm. That would seem to violate important principles in the use of AIs; namely that an AI used in making decisions should be transparent and able to explain why it made the decision. Perhaps the agency involved doesn’t want to reveal the nasty logic behind their algorithms.
  • Further, there is no recourse, another violation of principle for AIs, namely that they should be accountable and there should be mechanisms to challenge a decision.
  • The officer then asked Weizman to provide them with more information, like his travel for the last 15 years and contacts, which he understandably declined to do. In effect the system was asking him to surveil himself and share that with a foreign government. Are we going to be put in the situation where we have to surrender privacy in order to get access to government services? We do that already for commercial services.
  • As Weizman points out, this shows the “arbitrary logic of the border” that is imposed on migrants. Borders have become grey zones where the laws inside a country don’t apply and the worst of a nation’s phobias are manifest.

PETER ROCKWELL Obituary

ROCKWELL, Peter Barstow. Sculptor, Scholar and Teacher, dies at 83. Died peacefully on February 6, 2020 in Danvers, MA.

My father passed away last Thursday, Feb. 6th. I’ve been gathering information and writing a short and a longer obituary. I’ve also been going through my father’s email, writing to the people he was in touch with. In a strange way I feel I am rolling up his life.

My sister posted an obituary in the Boston Globe: PETER ROCKWELL Obituary – Boston, MA | Boston Globe. Interestingly the Globe ran their own short article Peter Rockwell, a sculptor and a son of Norman Rockwell, dies at 83.

What is touching are all the heartfelt condolences coming in from students, friends and colleagues who enjoyed his company and work.

Show and Tell at CRIHN


Stéphane Pouyllau’s photo of me presenting

Michael Sinatra invited me to a “show and tell” workshop at the new Université de Montréal campus where they have a long data wall. Sinatra is the Director of CRIHN (Centre de recherche interuniversitaire sur les humanités numériques) and kindly invited me to show what I am doing with Stéfan Sinclair and to see what others at CRIHN and in France are doing.

Continue reading Show and Tell at CRIHN

Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story on how Facebook to Pay $550 Million to Settle Facial Recognition Suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook’s face recognition technology that was part of Tag Suggestions, which would suggest names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent. The Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa) BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this:

  • Face recognition is dependent on biometric data which is extracted from an image or some other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.
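The position above is a conjunction of conditions: the system passes the test only if every condition holds, and failing any one fails the whole test. A minimal sketch of that logic (all names are hypothetical and this is an illustration of the argument's structure, not legal advice):

```python
# Hypothetical sketch of the BIPA-style position as a conjunction of
# conditions. All names are illustrative, not from any real API or statute.
from dataclasses import dataclass


@dataclass
class BiometricRecord:
    consent_given: bool        # the subject consented to collection
    securely_stored: bool      # the data is stored securely
    deleted_on_schedule: bool  # the data is destroyed in a timely manner


def not_unethical(record: BiometricRecord) -> bool:
    """Under this position, a system is 'not unethical' only if every
    condition holds; failing any single one fails the whole test."""
    return (record.consent_given
            and record.securely_stored
            and record.deleted_on_schedule)


# Consent and secure storage alone are not enough without timely deletion.
print(not_unethical(BiometricRecord(True, True, False)))  # → False
```

The point the sketch makes explicit is that the conditions are jointly necessary: consent without secure storage, or secure storage without timely deletion, still fails the test.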

There are a number of interesting points to be made about this position. First, it is not the gathering, storing and providing of access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?

Second, my reading is that a service like Clearview AI could also be sued if they scrape images of people in Illinois and extract biometric data. This could provide an answer to the question of what is ethically wrong about the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which they are gathering it when seeking consent and limit their use of the data to the identified uses. When they no longer need the data for the identified use, they should destroy it. This is essentially part of the PIPEDA principle of Limiting Use, Disclosure and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn’t always the case. Sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.

Avast closes Jumpshot over data privacy backlash, but transparency is the real issue

Avast will shutter its Jumpshot subsidiary just days after an exposé targeted the way it sold user data. But transparency remains the bigger issue.

From VentureBeat (via Slashdot) comes the news that antivirus company Avast closes Jumpshot over data privacy backlash, but transparency is the real issue (Paul Sawers, Jan. 30, 2020). Avast had been found to have been gathering detailed data about users of its antivirus tools and then selling anonymized data through Jumpshot. The data was of sufficient detail (apparently down to an “all clicks feed”) that it would probably be possible to deanonymize the data. So what was the ethical problem here?

As the title of the story advertises, the issue was not that Avast was concealing what it was doing; it is more a matter of how transparent it was about what it was doing. The data collection was “opt out,” so you had to go find the setting rather than being asked if you wanted to “opt in.” Jumpshot was apparently fairly open about its business. The information they provided to help you make a decision was not particularly deceptive (see image below), but it is typical of software to downplay the identifiability of data collected.

Some of the issue is around consent. What realistically constitutes consent these days? Does one need to opt-in for there to be meaningful consent? Does one need sufficient information to make a decision, and if so, what would that be?

The Secretive Company That Might End Privacy as We Know It

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

The New York Times has an important story about Clearview AI, The Secretive Company That Might End Privacy as We Know It. Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application that you can upload a picture to and it tries to recognize the person and show you their social media trail. They are then selling the service to police forces.

Needless to say, this is a disturbing use of face recognition for surveillance using our own social media. They are using public images that any one of us could look at, but at a scale no person could handle. They are doing something that would be almost impossible to stop, even with legislation. What’s to stop the intelligence services of another country doing this (and more)? Perhaps privacy is no longer possible.

Continue reading The Secretive Company That Might End Privacy as We Know It

36 Years Ago Today, Steve Jobs Unveiled the First Macintosh

MacRumors has a story about how today is the 36th anniversary of the unveiling of the Macintosh. See 36 Years Ago Today, Steve Jobs Unveiled the First Macintosh. At the time I was working in Kuwait and had an Apple II clone. When a Macintosh came to a computer store I went down with a friend to try it. I must admit the Graphical User Interface (GUI) appealed to me immediately despite the poor performance. When I got back to Canada in 1985 to start graduate school I bought my first Macintosh, a 512K with a second disk drive. Later I hacked a RAM upgrade and got a small hard drive. Of course now I regret selling the computer to a friend in order to upgrade.

How Science Fiction Imagined the 2020s

What ‘Blade Runner,’ cyberpunk, and Octavia Butler had to say about the age we’re entering now

2020 is not just any year: because it is shorthand for perfect vision, it is a date that people liked to imagine in the past. OneZero, a Medium publication, has a nice story on How Science Fiction Imagined the 2020s (Jan. 17, 2020). The article looks at stories like Blade Runner (1982) that predicted what these years would be like. How accurate were they? Did they get the spirit of this age right? The author, Tim Maughan, reflects on why so many stories of the 1980s and early 1990s seemed to be concerned with many of the same issues that concern us now. He sees a similar concern with inequality and boom/bust economies. He also sees sci-fi writers like Octavia Butler paying attention back then to climate change.

It was also the era when climate change started to make the news for the first time, and while it didn’t find its way into the public consciousness quickly enough, it certainly seemed to have grabbed the interest of science fiction writers.

Continue reading How Science Fiction Imagined the 2020s