The Secretive Company That Might End Privacy as We Know It

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

The New York Times has an important story about Clearview AI: The Secretive Company That Might End Privacy as We Know It. Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application to which you can upload a picture; it tries to recognize the person and show you their social media trail. Clearview is now selling the service to police forces.
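
To make the mechanism concrete, here is a minimal sketch of the general technique: scraped photos are reduced to embedding vectors, and an uploaded photo is matched against the index by nearest neighbour. This is not Clearview's actual system; the embed stub, profile URLs, and threshold are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical stand-in for a real face-embedding model (in practice a
# neural network that maps a face image to a fixed-length vector). Here
# we derive a deterministic vector from the image name so the sketch is
# self-contained and runnable.
def embed(image: str, dim: int = 128) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)  # unit length, so dot product = cosine similarity

# Index built from scraped photos: profile URL -> face embedding.
scraped = {
    "https://example.com/profiles/alice": "alice_photo.jpg",
    "https://example.com/profiles/bob": "bob_photo.jpg",
}
index = {url: embed(img) for url, img in scraped.items()}

def identify(query_image: str, threshold: float = 0.9):
    """Return the profile whose indexed photo best matches the query."""
    q = embed(query_image)
    best_url = max(index, key=lambda url: q @ index[url])
    return best_url if q @ index[best_url] >= threshold else None

print(identify("alice_photo.jpg"))  # -> alice's profile URL
```

The matching itself is standard; what makes the service alarming is the scale of the scraped index behind it.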

Needless to say, this is a disturbing use of face recognition for surveillance, built out of our own social media. They are using public images that any one of us could look at, but at a scale no person could handle. What they are doing would be almost impossible to stop, even with legislation. What’s to stop the intelligence services of another country from doing this (and more)? Perhaps privacy is no longer possible.

There are 2,373 squirrels in Central Park. I know because I helped count them

I volunteered for the first squirrel census in the city. Here’s what I learned, in a nutshell.

From Lauren Klein on Twitter I learned about a great New York Times article, There are 2,373 squirrels in Central Park. I know because I helped count them, by Denise Lau (Jan. 8, 2020). As Klein points out, it is about the messiness of data collection. (Note that she has a book coming out on Data Feminism with Catherine D’Ignazio.)

In 2020, let’s stop AI ethics-washing and actually do something – MIT Technology Review

But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement.

Thanks to Oliver I came across this call for an end to ethics-washing by artificial intelligence reporter Karen Hao in the MIT Technology Review, In 2020, let’s stop AI ethics-washing and actually do something. The call echoes something I’ve been talking about: that we need to move beyond guidelines, lists of principles, and checklists. She nicely surveys some of the initiatives to hold AI accountable that are taking place and what should happen next. Read on if you want to see what I think we need.

The 100 Worst Ed-Tech Debacles of the Decade

With the end of the year, some great articles are showing up reflecting on the debacles of the decade. One of my favorites is Audrey Watters’ The 100 Worst Ed-Tech Debacles of the Decade. Ed-tech is one of those fields where, over and over, techies think they know better. Some of the debacles Watters discusses:

  • 3D Printing
  • The “Flipped Classroom” (Full disclosure: I sat on a committee that funded these.)
  • Op-Eds to ban laptops
  • Clickers
  • Stories about the end of the library
  • Interactive whiteboards
  • The K-12 Cyber Incident Map (Check it out here)
  • IBM Watson
  • The Year of the MOOC

This collection of 100 terrible ideas in instructional technology should be mandatory reading for all of us who have been keen on ed-tech. (And I am one who has developed ed-tech and oversold it.) Each item is a mini essay with links worth following.

From Facebook: An Update on Building a Global Oversight Board

We’re sharing the progress we’ve made in building a new organization with independent oversight over how Facebook makes decisions on content.

Brent Harris, Director of Governance and Global Affairs at Facebook, has an interesting blog post that provides An Update on Building a Global Oversight Board (Dec. 12, 2019). Facebook is developing an independent Global Oversight Board that will be able to make decisions about content on Facebook.

I can’t help feeling that Facebook is still trying to avoid being a content company. Instead of admitting that parts of what they do match what media content companies do, they want to stick to a naive, but convenient, view that Facebook is a technological facilitator and content comes from somewhere else. This, like the view that bias in AI is always in the data and never in the algorithms, allows the company to continue with the pretence that they deal only in clean technology and algorithms. All the old human forms of judgement will be handled by an independent GOB so Facebook doesn’t have to admit they might have a position on anything.

What Facebook should do is admit that they are a media company and that they make decisions that influence what users see (or don’t). They should do what newspapers do: embrace the editorial function as part of what it means to deal in content. There is still, in newspapers, an affectation of separation between opinionated editorial and objective reporting, but it is one that is open for discussion. What Facebook is doing is not taking responsibility, but sequestering it. This will allow Facebook to play innocent as wave after wave of fake news stories sweeps through their system.

Still, it is an interesting response from a company that obviously wants to deal in news for the economic value, but doesn’t want to be corrupted by it.

The weird, wonderful world of Y2K survival guides

The category amounted to a giant feedback loop in which the existence of Y2K alarmism led to more of the same.

Harry McCracken in Fast Company has a great article on The weird, wonderful world of Y2K survival guides: A look back (Dec. 13, 2019). The article samples some of the hype around the disruptive potential of the millennium. Particularly worrisome are the political aspects of the folly. People (again) predicted the fall of the government and the need to prepare for the ensuing chaos. (Why is it that some people seem to look forward to such a collapse?)

Technical savvy didn’t necessarily inoculate an author against millennium-bug panic. Edward Yourdon was a distinguished software architect with plenty of experience relevant to the challenge of assessing the Y2K bug’s impact. His level of Y2K gloominess waxed and waned, but he was prone to declarations such as “my own personal Y2K plans include a very simple assumption: the government of the U.S., as we currently know it, will fall on 1/1/2000. Period.”

Interestingly, few people panicked despite all the predictions. Most people went out and celebrated.

All of this should be a warning for those of us who are tempted to predict that artificial intelligence or social media will lead to some sort of disaster. There is an ethics to predicting ethical disruption. Disruption, almost by definition, never happens as you thought it would.

Slaughterbots

On the Humanist discussion list John Keating recommended the short video Slaughterbots, which presents a plausible scenario where autonomous drones are used to target dissent using social media data. Watch it! It is well done and raises real issues in a credible way.

While the short is really about autonomous weapons and the need to ban them, I note that one of the ideas included is that dissent could be silenced by using social media to target people. The scenario imagines that university students who shared a dissenting video on social media have their data harvested (including images of their faces) and are then targeted by drones using face recognition. Science fiction, but suggestive of how social media presence can be used for control.

The war on (unwanted) dick pics has begun

Legislators and tech companies are finally working to protect women from receiving unwanted sexually explicit images online – will it work?

The war on (unwanted) dick pics has begun, according to a Guardian article about a web developer who asked people to send her “dick pics” so she could train a machine to recognize and deal with them. The Guardian rightly asks why tech companies don’t provide more tools for users to deal with harassing messages.
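
The underlying approach is ordinary supervised classification: collect labelled examples, train a model, and quarantine images the model flags. The article doesn’t detail her model, so this is only a generic sketch using scikit-learn, with synthetic feature vectors standing in for image features so it runs on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for image feature vectors; a real system would use
# features from a vision model trained on the collected photos.
rng = np.random.default_rng(0)
n, dim = 200, 64
X_explicit = rng.normal(loc=1.0, size=(n, dim))   # labelled "explicit" examples
X_benign = rng.normal(loc=-1.0, size=(n, dim))    # labelled "benign" examples
X = np.vstack([X_explicit, X_benign])
y = np.array([1] * n + [0] * n)

clf = LogisticRegression(max_iter=1000).fit(X, y)

def should_quarantine(image_features: np.ndarray, threshold: float = 0.8) -> bool:
    """Hide the image only if the model is confident it is explicit."""
    p = clf.predict_proba(image_features.reshape(1, -1))[0, 1]
    return p >= threshold

print(should_quarantine(rng.normal(loc=1.0, size=dim)))   # likely True
```

The design point is the threshold: filtering harassment is a precision/recall trade-off that users, not just platforms, should be able to tune.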

The interesting thing is how many women get them (53% get lewd images) and how many men have sent one (27% of millennial men). (Data from Pew Online Harassment 2017 and the I’ll Show You Mine study.)

Of Course Citizens Should Be Allowed to Kick Robots | WIRED

Seen in the wild, robots often appear cute and nonthreatening. This doesn’t mean we shouldn’t be hostile.

Wired magazine has a nice short piece that suggests Of Course Citizens Should Be Allowed to Kick Robots. In some ways the point of the essay is not how to treat robots, but whether we should tolerate these surveillance robots on our sidewalks.

Blanket no-punching policies are useless in a world full of terrible people with even worse ideas. That’s even more true in a world where robots now do those people’s bidding. Robots, like people, are not all the same. While K5 can’t threaten bodily harm, the data it collects can cause real problems, and the social position it puts us all in—ever suspicious, ever watched, ever worried about what might be seen—is just as scary. At the very least, it’s a relationship we should question, not blithely accept as K5 rolls by.

The question, “Is it OK to kick a robot?” is a good one that nicely brings the ethics of human-robot co-existence down to earth and onto our sidewalks. What sort of respect does the proliferation of robots and scooters deserve? How should we treat these stupid things when they enter our everyday spaces?

CIFAR Amii Summer Institute On AI And Society

Last week I attended the CIFAR and Amii Summer Institute on AI and Society. This brought together a group of faculty and new scholars to workshop ideas about AI, Ethics and Society. You can see my conference notes on philosophi.ca. Some of the interventions that struck me included:

  • Rich Sutton talked about how AI is a way of trying to understand what it is to be human. He defined intelligence as the ability to achieve goals in the world. Reinforcement learning is a form of machine learning aimed at autonomous AI; it is therefore more ambitious and harder, but it may also get us closer to intelligence. RL uses value functions to map states to values; agents then try to maximize rewards (reaching states that have value). It is a simplistic, but powerful, idea about intelligence. (A toy sketch of the value-function idea follows this list.)
  • Jason Millar talked about autonomous vehicles and how, right now, mobility systems like Google Maps have only one criterion for planning a route for you, namely the time it takes to get there. He asked what it would look like to have other criteria, like how difficult the driving would be, the best view, or the fewest bumps. He wants the mobility systems being developed to be open to different values, since these systems will become part of our mobility infrastructure. (A sketch of such value-weighted routing also follows the list.)
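
First, the value-function idea. This toy tabular Q-learning example is my own illustration, not anything from Sutton's talk: an agent in a five-state corridor learns values that propagate back from a rewarding goal state.

```python
import numpy as np

n_states = 5                 # corridor: states 0..4, reward for reaching state 4
actions = [-1, +1]           # move left or move right
Q = np.zeros((n_states, len(actions)))   # estimated value of each (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
rng = np.random.default_rng(1)

for _ in range(500):         # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy: usually pick the highest-value action, sometimes explore
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, min(n_states - 1, s + actions[a]))
        r = 1.0 if s2 == n_states - 1 else 0.0
        # nudge the estimate toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.round(2))            # learned values grow toward the goal state
```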
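
Second, Millar's point about value-open routing. If edge costs are a weighted blend of criteria rather than time alone, changing the weights changes the "best" route. The graph, criteria, and weights below are invented for illustration.

```python
import heapq

# edges: node -> list of (neighbor, {criterion: cost})
graph = {
    "A": [("B", {"time": 5, "difficulty": 1, "bumps": 0}),
          ("C", {"time": 2, "difficulty": 4, "bumps": 3})],
    "B": [("D", {"time": 4, "difficulty": 1, "bumps": 1})],
    "C": [("D", {"time": 2, "difficulty": 5, "bumps": 4})],
    "D": [],
}

def route(start, goal, weights):
    """Dijkstra over a scalarized multi-criteria edge cost."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, criteria in graph[node]:
            step = sum(weights.get(k, 0) * v for k, v in criteria.items())
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(route("A", "D", {"time": 1.0}))                      # fastest: A-C-D
print(route("A", "D", {"time": 0.2, "difficulty": 1.0}))   # easiest: A-B-D
```

Weighting time alone picks the short, rough route; weighting difficulty flips the planner to the gentler one, which is exactly the value-sensitivity Millar is asking for.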

After a day of talks, during which I presented on the history of discussions of autonomy, we had a day and a half of workshops where groups formed and developed projects. I was part of a team that developed a critique of the EU Guidelines for Trustworthy AI.