The Prodigal Techbro

The journey feels fake. These ‘I was lost but now I’m found, please come to my TED talk’ accounts typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.

Maria Farrell, a technology policy critic, has written a nice essay, The Prodigal Techbro. She sympathizes with tech bros who have changed their minds, in the sense of wishing them well, but feels they shouldn’t get so much attention. Instead we should be caring for the people who were critics from the beginning and who actually need the attention and support. She maps this onto the parable of the Prodigal Son: why does the son who was lost get all the attention? She makes it an ethical issue, which is interesting; I imagine it fitting an ethics of care.

She ends the essay with this advice to techies who are changing their mind:

So, if you’re a prodigal tech bro, do us all a favour and, as Rebecca Solnit says, help “turn down the volume a little on the people who always got heard”:

  • Do the reading and do the work. Familiarize yourself with the research and what we’ve already tried, on your own time. Go join the digital rights and inequality-focused organizations that have been working to limit the harms of your previous employers and – this is key – sit quietly at the back and listen.
  • Use your privilege and status and the 80 percent of your network that’s still talking to you to big up activists who have been in the trenches for years already—especially women and people of colour. Say ‘thanks but no thanks’ to that invitation and pass it along to someone who’s done the work and paid the price.
  • Understand that if you are doing this for the next phase of your career, you are doing it wrong. If you are doing this to explain away the increasingly toxic names on your resumé, you are doing it wrong. If you are doing it because you want to ‘give back,’ you are doing it wrong.

Do this only because you recognize and can say out loud that you are not ‘giving back’, you are making amends for having already taken far, far too much.

Reflecting on one very, very strange year at Uber

As most of you know, I left Uber in December and joined Stripe in January. I’ve gotten a lot of questions over the past couple of months about why I left and what my time at Uber was like. It’s a strange, fascinating, and slightly horrifying story that deserves to be told while it is still fresh in my mind, so here we go.

The New York Times has a short review of Susan Fowler’s memoir, Her Blog Post About Uber Upended Big Tech. Now She’s Written a Memoir. Susan Fowler is the courageous engineer who documented the sexism at Uber in a blog post, Reflecting on one very, very strange year at Uber. Her blog post from 2017 (the opening of which is quoted above) was important in that it drew attention to the bro culture of Silicon Valley. It also led to investigations within Uber and eventually to the ousting of co-founder and CEO Travis Kalanick.

Avast closes Jumpshot over data privacy backlash, but transparency is the real issue

Avast will shutter its Jumpshot subsidiary just days after an exposé targeted the way it sold user data. But transparency remains the bigger issue.

From VentureBeat (via Slashdot) comes the news that antivirus company Avast closes Jumpshot over data privacy backlash, but transparency is the real issue (Paul Sawers, Jan. 30, 2020). Avast had been found to be gathering detailed data about users of its antivirus tools and then selling anonymized data through its Jumpshot subsidiary. The data was detailed enough (apparently down to an “all clicks feed”) that it would probably be possible to deanonymize individual users.
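To see why, consider how easily such a feed can be joined with any one site’s own records. Here is a minimal Python sketch; the device IDs, URLs, and order number are all invented for illustration and are not from the Jumpshot data:

    # Sketch: re-identifying an "anonymized" click feed by joining it
    # with a single site's own records. All values here are invented.

    # Jumpshot-style records: a persistent device ID plus every URL visited.
    clicks = [
        ("device-7f3a", "2019-12-01 09:14:02",
         "https://shop.example/orders/confirmation?id=98765"),
        ("device-7f3a", "2019-12-01 09:20:40", "https://mail.example/inbox"),
        ("device-22c1", "2019-12-01 10:02:11", "https://news.example/article/42"),
    ]

    # The shop alone knows which customer placed order 98765. Matching that
    # order ID against the feed re-identifies the device.
    order_to_customer = {"98765": "Jane Doe"}

    for device, timestamp, url in clicks:
        for order_id, name in order_to_customer.items():
            if order_id in url:
                print(f"{device} is {name}")

One confirmed match is enough: because the device ID persists across sites, every other click recorded for that device is exposed at once.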

So what was the ethical problem here? As the title of the story suggests, the issue was not that Avast was concealing what it was doing; it is more a matter of how transparent it was about it. The data collection was “opt out,” so you had to go find the setting rather than being asked whether you wanted to “opt in.” Jumpshot was apparently fairly open about its business. The information provided to help users make a decision was not particularly deceptive, but it is typical of software to downplay the identifiability of the data collected.

Some of the issue is around consent. What realistically constitutes consent these days? Does one need to opt in for there to be meaningful consent? Does one need sufficient information to make a decision, and if so, what would that be?

There are 2,373 squirrels in Central Park. I know because I helped count them

I volunteered for the first squirrel census in the city. Here’s what I learned, in a nutshell.

From Lauren Klein on Twitter I learned about a great New York Times article, There are 2,373 squirrels in Central Park. I know because I helped count them, by Denise Lau (Jan. 8, 2020). As Klein points out, it is about the messiness of data collection. (Note that she has a book coming out with Catherine D’Ignazio, Data Feminism.)

From Facebook: An Update on Building a Global Oversight Board

We’re sharing the progress we’ve made in building a new organization with independent oversight over how Facebook makes decisions on content.

Brent Harris, Director of Governance and Global Affairs at Facebook, has an interesting blog post that provides An Update on Building a Global Oversight Board (Dec. 12, 2019). Facebook is developing an independent Global Oversight Board that will be able to make decisions about content on Facebook.

I can’t help feeling that Facebook is still trying to avoid being a content company. Instead of admitting that parts of what they do matches what media content companies do, they want to stick to a naive, but convenient, view that Facebook is a technological facilitator and content comes from somewhere else. This, like the view that bias in AI is always in the data and not in the algorithms, allows the company to continue with the pretence that they are about clean technology and algorithms. All the old human forms of judgement will be handled by an independent GOB so Facebook doesn’t have to admit they might have a position on something.

What Facebook should do is admit that they are a media company and that they make decisions that influence what users see (or not). They should do what newspapers do: embrace the editorial function as part of what it means to deal in content. There is still, in newspapers, an affectation of separation between the opinionated editorial function and objective reporting, but it is one that is open for discussion. What Facebook is doing is not a taking of responsibility but a sequestering of it. This will allow Facebook to play innocent as wave after wave of fake news stories sweeps through their system.

Still, it is an interesting response from a company that obviously wants to deal in news for the economic value, but doesn’t want to be corrupted by it.

The weird, wonderful world of Y2K survival guides

The category amounted to a giant feedback loop in which the existence of Y2K alarmism led to more of the same.

Harry McCracken in Fast Company has a great article on The weird, wonderful world of Y2K survival guides: A look back (Dec. 13, 2019). The article samples some of the hype around the disruptive potential of the millennium. Particularly worrisome are the political aspects of the folly. People (again) predicted the fall of the government and the need to prepare for the ensuing chaos. (Why is it that some people seem to look forward to such a collapse?)

Technical savvy didn’t necessarily inoculate an author against millennium-bug panic. Edward Yourdon was a distinguished software architect with plenty of experience relevant to the challenge of assessing the Y2K bug’s impact. His level of Y2K gloominess waxed and waned, but he was prone to declarations such as “my own personal Y2K plans include a very simple assumption: the government of the U.S., as we currently know it, will fall on 1/1/2000. Period.”

Interestingly, few people panicked despite all the predictions. Most people went out and celebrated.

All of this should be a warning for those of us who are tempted to predict that artificial intelligence or social media will lead to some sort of disaster. There is an ethics to predicting ethical disruption. Disruption, almost by definition, never happens as you thought it would.

50th Anniversary of the Internet

Page from a notebook documenting the connection on Oct. 29, 1969. From UCLA special collections, via this article.

50 years ago, on October 29th, 1969, the first two nodes of the ARPANET are supposed to have connected. There are, of course, all sorts of caveats, but it seems to have been one of the first times someone remotely logged in from one location to another on what became the internet. Gizmodo has an interview with Bradley Fidler on the history that is worth reading.

Remote access was one of the reasons the internet was funded by the US government. They didn’t want to give everyone their own computer. Instead the internet (ARPANET) would let people use the computers of others remotely (see Hafner & Lyon 1996).

Interestingly, I also just read a story that the internet (or at least North America) has just run out of IP addresses. The IPv4 addresses have been exhausted and not everyone has switched to IPv6, which has many more available addresses. I blame the Internet of Things (IoT) for assigning addresses to every “smart” object.
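To get a sense of the scale involved, here is a quick back-of-the-envelope calculation (a Python sketch of my own, not from the story):

    # IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
    ipv4_total = 2 ** 32
    ipv6_total = 2 ** 128

    print(f"IPv4: {ipv4_total:,} addresses")    # 4,294,967,296
    print(f"IPv6: {ipv6_total:.2e} addresses")  # about 3.40e+38
    print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:.2e}")

Four billion addresses seemed inexhaustible when IPv4 was specified in 1981; a few billion people, each with several connected devices, proved otherwise.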

Hafner, K., & Lyon, M. (1996). Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon & Schuster.

The war on (unwanted) dick pics has begun

Legislators and tech companies are finally working to protect women from receiving unwanted sexually explicit images online – will it work?

The war on (unwanted) dick pics has begun, according to a Guardian article about a web developer who asked people to send her “dick pics” so she could train a machine to recognize them and deal with them. The Guardian rightly asks why tech companies don’t provide more tools for users to deal with harassing messages.
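For a sense of what “training a machine to recognize them” involves, here is a minimal sketch of the standard approach: fine-tuning a pretrained image classifier on folders of labelled images. The folder layout and training details are my own assumptions; the article does not describe the developer’s actual code:

    # Sketch: fine-tune a pretrained network to flag explicit images.
    # Assumes a hypothetical layout of data/explicit/ and data/safe/.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder("data", transform=transform)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    # Start from a network pretrained on ImageNet and replace its final
    # layer with a two-class head: explicit vs. safe.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one pass over the collected images
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

The hard part is not the model but the data: someone has to collect and label the images, which is exactly why she asked for submissions.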

The interesting thing is how many women get them (53% get lewd images) and how many men have sent one (27% of millennial men). (Data from the Pew Online Harassment 2017 report and the I’ll Show You Mine study.)

Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It

Scooters have come to Edmonton. Both Bird and Lime dumped hundreds of scooters in my neighbourhood just before the Fringe festival. Users are supposed to ride in bike lanes and on shared-use paths, but of course they tend to use sidewalks. Fortunately most people using them seem to be trying them for a lark rather than seriously trying to get somewhere.

I can’t help thinking this business is a bit like the Segway (a company now apparently making money selling the scooters) – a great concept that appeals to venture capital, but not something that will work economically. For example, what will happen in the winter? Will the companies leave the scooters out in the snow or pack them up for the season?

The economic model of these companies is also interesting. They seem to have minimal staff in each city. They pay “chargers” to find the scooters and charge them each night. More gig-economy work that may not provide a living! See Bird Scooter Charging Is ‘One Level Up From Collecting Cans’–But These Entrepreneurs Are Making a Lucrative Business of It.

At the end of the day, does anyone make enough to make this viable? One wonders if the scooter companies are selling the data they gather.

Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg?

‘Imagine this for a second…’ (2019) from Bill Posters on Vimeo.

A ‘deepfake’ of Zuckerberg was uploaded to Instagram and appears to show him delivering an ominous message

The issue of “deepfakes” is big on the internet after someone posted a slowed-down video of Nancy Pelosi to make her look drunk and then, after Facebook didn’t take it down, a group posted a fake Zuckerberg video. See Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg? This video was created by the artists Posters and Howe and is part of a series.

While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in the localization of educational content, for example). The artists provided a voice actor with a script, and then the AI trained on existing video of Zuckerberg and on video of the voice actor to morph Zuckerberg’s facial movements to match the actor’s.

What is interesting is that the Zuckerberg video is part of an installation called Spectre, with a number of deliberate fakes, that was exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name suggests, evokes how our data can be used to create ghost media of us, but also playfully reminds us of the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.