“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”
The New York Times has an important story about Clearview AI, "The Secretive Company That Might End Privacy as We Know It." Clearview, which is partly funded by Peter Thiel, scraped a number of social media sites for pictures of people and has developed an AI application to which you can upload a picture; it tries to recognize the person and show you their social media trail. They are then selling the service to police forces.
Needless to say, this is a disturbing use of face recognition for surveillance built on our own social media. They are using public images that any of us could look at, but at a scale no person could handle. They are doing something that would be almost impossible to stop, even with legislation. What's to stop the intelligence services of another country from doing this (and more)? Perhaps privacy is no longer possible.
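To make the mechanics concrete, here is a minimal sketch of how such a matching service might work. Clearview has disclosed nothing about its actual pipeline, so this uses the open-source face_recognition library (built on dlib) instead; the folder name, file names, and match threshold are assumptions for illustration, and a real web-scale system would need a vector index rather than a loop.

```python
# A toy sketch of the face-matching step, assuming a hypothetical folder
# of images scraped from public profiles. Not Clearview's actual method.
import os
import face_recognition

GALLERY_DIR = "scraped_profiles"  # hypothetical: files named after profile URLs

# 1. Index: compute a 128-dimensional embedding for each gallery face.
gallery = {}
for fname in os.listdir(GALLERY_DIR):
    image = face_recognition.load_image_file(os.path.join(GALLERY_DIR, fname))
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # skip images with no detectable face
        gallery[fname] = encodings[0]  # keep the first face found

# 2. Query: embed the uploaded photo and find the closest gallery face.
query_image = face_recognition.load_image_file("uploaded_photo.jpg")
query_encodings = face_recognition.face_encodings(query_image)
if query_encodings:
    names = list(gallery)
    distances = face_recognition.face_distance(
        [gallery[n] for n in names], query_encodings[0]
    )
    best = min(zip(distances, names))  # smallest embedding distance wins
    if best[0] < 0.6:                  # dlib's conventional match threshold
        print(f"Probable match: {best[1]} (distance {best[0]:.3f})")
    else:
        print("No confident match")
```

The unsettling part is how little of this is exotic: the hard work is not the matching code but the scraping, which is exactly the step legislation and platform terms of service struggle to police.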
From an ethical point of view, it is useful to ask: what exactly is wrong with the application?
Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.
“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”
Clearview itself seems to know that its business model is likely to cause concern, even if they feel they are making the best use of the technology by offering it to police services (as opposed to others). The suggestion is that while such a service may be unethical, it's going to happen anyway, and they are making the ethically best possible use of it. Who could object to a technology that helps catch bad guys? They may be waiting for the culture to get used to the use of face recognition. Once we are acclimatized and stop being appalled, we will stop trying to control face recognition, at which point they could roll out more public versions of the service. Who wouldn't pay for a tool that lets you check up discreetly on people you've met?
Back to Clearview AI and the ethics of the service. One argument might be that the company, while providing law enforcement a service, can also watch law enforcement for other purposes and even manipulate results. Kashmir Hill, the New York Times reporter who broke the story, discovered that they began to monitor her when she started investigating them.
While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
Clearview also, apparently, removed information about Hill when they realized she was writing about them. This was later blamed on a software bug.
There are also legal problems with scraping information off other sites. Facebook is apparently reviewing the situation, and a later New York Times story reports that "Twitter Tells Facial Recognition Trailblazer to Stop Using Site's Photos." But what can an individual do?
I decided to do a Google Image search using a photo of my face that I had never put up on the web. I thought this would let me test what an open image search service could do and how that might differ from recognition. The results show that it is very different: the search suggests related queries and images similar to the uploaded one. I got lots of pictures of elderly white men with beards. The search suggestions were also rather amusing; Google suggested a possible related search could be for "gentleman."
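The gap between "similar images" and "same person" can be made concrete. The sketch below, assuming two hypothetical photos of the same face, uses perceptual hashing as a stand-in for the kind of whole-image similarity a generic reverse image search performs, and a face embedding for identity matching; the file names are invented for illustration.

```python
# Contrast image *similarity* with face *recognition*, assuming two
# different photos of the same person exist at the paths below.
import face_recognition
import imagehash
from PIL import Image

PHOTO_A = "me_at_home.jpg"      # hypothetical photo 1
PHOTO_B = "me_on_vacation.jpg"  # hypothetical photo 2, same person

# Whole-image similarity: two different photos hash far apart,
# even when they show the same person.
dist_pixels = imagehash.phash(Image.open(PHOTO_A)) - imagehash.phash(Image.open(PHOTO_B))
print(f"Perceptual-hash distance: {dist_pixels}")  # large for distinct photos

# Identity similarity: small embedding distance because the *face* matches.
# (Assumes a face is detectable in each image.)
enc_a = face_recognition.face_encodings(face_recognition.load_image_file(PHOTO_A))[0]
enc_b = face_recognition.face_encodings(face_recognition.load_image_file(PHOTO_B))[0]
dist_face = face_recognition.face_distance([enc_a], enc_b)[0]
print(f"Face-embedding distance: {dist_face:.3f}")  # typically < 0.6 for the same person
```

My Google experiment sat on the first side of this divide, which is why it returned bearded strangers rather than me; Clearview's service sits on the second.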
Anyway, this still leaves the question of what exactly is unethical about Clearview's service. Stay tuned for further thoughts.