Facial Recognition: What Happens When We’re Tracked Everywhere We Go?

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.

The New York Times has an in-depth story about Clearview AI titled Facial Recognition: What Happens When We’re Tracked Everywhere We Go? The story tracks the various lawsuits attempting to stop Clearview and suggests that Clearview may well win. The company is gambling that scraping the web’s faces for its application, even if it violated terms of service, may be protected as free speech.

The story talks about the dangers of face recognition and how many of the algorithms can’t recognize people of colour as accurately, which leads to more false positives where police end up arresting the wrong person. A broader worry is that this could unleash tracking at another scale.

There’s also a broader reason that critics fear a court decision favoring Clearview: It could let companies track us as pervasively in the real world as they already do online.

The arguments in favour of Clearview include the claim that the company is essentially doing for images what Google does for text searches. Another argument is that stopping face recognition enterprises would stifle innovation.

The story then moves on to talk about the founding of Clearview and the political connections of the founders (Peter Thiel invested in Clearview too). Finally, it talks about how widely available face recognition could affect our lives. The story quotes Alvaro Bedoya, who started a privacy centre:

“When we interact with people on the street, there’s a certain level of respect accorded to strangers,” Bedoya told me. “That’s partly because we don’t know if people are powerful or influential or we could get in trouble for treating them poorly. I don’t know what happens in a world where you see someone in the street and immediately know where they work, where they went to school, if they have a criminal record, what their credit score is. I don’t know how society changes, but I don’t think it changes for the better.”

It is interesting to think about how face recognition and other technologies may change how we deal with strangers. Too much knowledge could be alienating.

The story closes by describing how Clearview AI helped identify some of the Capitol rioters. Of course it wasn’t just Clearview: citizen investigators also named and shamed people based on the photos released.

260,000 Words, Full of Self-Praise, From Trump on the Virus

The New York Times has a nice content analysis study of Trump’s Coronavirus briefings, 260,000 Words, Full of Self-Praise, From Trump on the Virus. They tagged the corpus for different types of utterances, including:

  • Self-congratulations
  • Exaggerations and falsehoods
  • Displays of empathy or appeals to national unity
  • Blaming others
  • Crediting others

Needless to say, they found he spent a fair amount of time congratulating himself.

They then created a neat visualization with colour-coded sections showing where he shows empathy or congratulates himself.

According to the article, they looked at 42 briefings and other remarks from March 9 to April 17, 2020, giving them a total of 260,000 words.

I decided to replicate their study with Voyant, so I gathered 29 Coronavirus Task Force Briefings (and one Press Conference) from February 29 to April 17. These are all the Task Force Briefings I could find on the White House web site. The corpus has 418,775 words, but those include remarks by people other than Trump, questions, and metadata.

One of the things that struck me was the absence of medical terminology among the high-frequency words. I was also intrigued by the prominence of “going to”. Trump spends a fair amount of time talking about what he and others are going to be doing rather than what has been done. Here is a Contexts panel from Voyant.
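If you don’t have Voyant handy, the same kind of frequency and keyword-in-context check can be sketched in a few lines of Python. This is only a rough approximation of what Voyant does, and the briefings folder and file names are hypothetical:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical folder of briefing transcripts saved as plain-text files.
text = " ".join(p.read_text(encoding="utf-8") for p in Path("briefings").glob("*.txt"))
lowered = text.lower()

# High-frequency words (crude tokenization and a tiny stopword list;
# Voyant applies a fuller stopword list by default).
words = re.findall(r"[a-z']+", lowered)
stopwords = {"the", "and", "to", "of", "a", "in", "that", "we", "it", "i", "is", "you"}
print(Counter(w for w in words if w not in stopwords).most_common(25))

# Keyword-in-context lines for "going to", similar to Voyant's Contexts panel.
for m in re.finditer(r"going to", lowered):
    snippet = text[max(0, m.start() - 40):m.end() + 40].replace("\n", " ")
    print("…" + snippet + "…")
```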

Welcome to Dialogica: Thinking-Through Voyant!

Do you need online teaching ideas and materials? Dialogica was supposed to be a textbook, but instead we are adapting it for use in online learning and self-study. It is shared here under a CC BY 4.0 license so you can adapt it as needed.

Stéfan Sinclair and I have put up a web site with tutorial materials for learning Voyant. See Dialogi.ca: Thinking-Through Voyant!

Dialogica (http://dialogi.ca) plays with the idea of learning through a dialogue. A dialogue with the text; a dialogue mediated by the tool; and a dialogue with instructors like us.

Dialogica is made up of a set of tutorials that students should be able to work through alone or with minimal support. These are Word documents that you (instructors) can edit to suit your teaching, and we are continuing to add to them. We have added a gloss of teaching notes. Later we plan to add Spyral notebooks that go into greater detail on technical subjects, including how to program in Spyral.

Dialogica is made available with a CC BY 4.0 license so you can do what you want with it as long as you give us some sort of credit.

Show and Tell at CRIHN


Stéphane Pouyllau’s photo of me presenting

Michael Sinatra invited me to a “show and tell” workshop at the new Université de Montréal campus where they have a long data wall. Sinatra is the Director of CRIHN (Centre de recherche interuniversitaire sur les humanités numériques) and kindly invited me to show what I am doing with Stéfan Sinclair and to see what others at CRIHN and in France are doing.


The Illusionistic Magic of Geometric Figuring

the purpose aimed at by Mantegna and Pozzo was not so much “to simulate stereopsis”—the process by which we see depth—but rather to achieve “a simulation of the perceptual effect of stereoptic vision.” Far from being visual literalists, these painters were literal illusionists—their aim was to make their audiences see something that wasn’t there.

CABINET has a nice essay by Margaret Wertheim connecting Bacon to Renaissance perspective to video games, The Illusionistic Magic of Geometric Figuring. Wertheim argues that starting with Roger Bacon there was a growing interest in the psychological power of virtual representation. Artists starting with Giotto in Assisi, then Mantegna, and later Pozzo created ever more perspectival representations that were seen as stunning at the time. (Pozzo painted the ceiling of St. Ignatius Being Received into Heaven in Sant’Ignazio di Loyola a Campo Marzio, Rome.)

The frescos in Assisi heralded a revolution both in representation and in metaphysical leaning whose consequences for Western art, philosophy, and science can hardly be underestimated. It is here, too, that we may locate the seed of the video gaming industry. Bacon was giving voice to an emerging view that the God of Judeo-Christianity had created the world according to geometric laws and that Truth was thus to be found in geometrical representation. This Christian mathematicism would culminate in the scientific achievements of Galileo and Newton four centuries later…

Wertheim connects this to the ever more immersive graphics of the videogame industry. Sometimes I forget just how far the graphics have come from the first immersive games I played, like Myst. Whatever else some games do, they are certainly visually powerful. It often seems a shame to have to go on a mission rather than just explore the world represented.

Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg?

‘Imagine this for a second…’ (2019) from Bill Posters on Vimeo.

A ‘deepfake’ of Zuckerberg was uploaded to Instagram and appears to show him delivering an ominous message

The issue of “deepfakes” is big on the internet after someone posted a slowed-down video of Nancy Pelosi to make her look drunk and then, after Facebook didn’t take it down, a group posted a fake Zuckerberg video. See Facebook refused to delete an altered video of Nancy Pelosi. Would the same rule apply to Mark Zuckerberg? The Zuckerberg video was created by the artists Posters and Howe and is part of a series.

While the Pelosi video was a crude hack, the Zuckerberg video used AI technology from Canny AI, a company that has developed tools for replacing dialogue in video (which has legitimate uses in the localization of educational content, for example). The artists provided a voice actor with a script, and the AI then trained on existing video of Zuckerberg and footage of the voice actor to morph Zuckerberg’s facial movements to match the actor’s.

What is interesting is that the Zuckerberg video is part of an installation called Spectre, a collection of deliberate fakes exhibited at a venue associated with the Sheffield Doc|Fest. Spectre, as the name implies, suggests how our data can be used to create ghost media of us, but also playfully reminds us of the fictional criminal organization that haunted James Bond. We are now being warned that real, but spectral, organizations could haunt our democracy, messing with elections anonymously.

We Built a (Legal) Facial Recognition Machine for $60

The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.

The New York Times has an opinion piece by Sahil Chinoy on how they (the Times) Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment where they took footage of people walking past cameras installed in Bryant Park and compared the faces to those of known people who work in the area (scraped from the web sites of organizations with offices in the neighborhood). Everything they did used public resources that others could use. The cameras stream their footage here. Anyone can scrape the images. The image database they gathered came from public web sites. The software is a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
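To get a sense of how little is involved, here is a minimal sketch of the matching step using the open-source face_recognition library rather than a commercial service. The file paths are made up for illustration:

```python
# A minimal sketch of face matching with the open-source face_recognition
# library; the Times used a commercial service, and these paths are made up.
import face_recognition

# A scraped staff photo and a frame grabbed from the public park stream.
known_image = face_recognition.load_image_file("staff/jane_doe.jpg")
frame = face_recognition.load_image_file("frames/bryant_park_0001.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face detected in the frame against the known face.
for encoding in face_recognition.face_encodings(frame):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if distance < 0.6:  # a common threshold; smaller distances are closer matches
        print(f"Possible match with jane_doe (distance {distance:.2f})")
```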

I’m intrigued by this experiment by the New York Times. It is a form of design thinking where they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could say it is a form of journalistic experimentation.

Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for recognition in all sorts of situations? Do we need to start guarding our faces?

Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.

This is one of a number of excellent articles by the New York Times that are part of their Privacy Project.

JSTOR Text Analyzer

JSTOR, and some other publishers of electronic research, have started building text analysis tools into their publishing platforms. I came across this at the end of a JSTOR article, where there was a link to “Get more results on Text Analyzer” which leads to a beta of the JSTOR Labs Text Analyzer environment.

JSTOR Labs Text Analyzer

This analyzer environment provides simple analytical tools for surveying an issue of a journal or an article. The emphasis is on extracting keywords and entities so that one can figure out whether an article or journal is useful. One can also use it to find other, similar things.

Results of Text Analyzer

What intrigues me is this embedding of tools into reading environments, which is different from the standard model of separate data and tools. I wonder how we could instrument Voyant so that it could be more easily embedded in other environments.
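As a rough comparison, the keyword-extraction side of what the Analyzer does can be approximated with TF-IDF in scikit-learn. This is a sketch, not JSTOR’s actual method, and the documents below are placeholders:

```python
# A rough approximation of keyword extraction using TF-IDF; not JSTOR's
# actual method, and the documents below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Facial recognition raises privacy and surveillance questions for cities.",
    "Collating manuscript witnesses supports collaborative scholarly editing.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

# The top-weighted terms of the first document serve as rough keywords.
weights = tfidf.toarray()[0]
keywords = sorted(zip(terms, weights), key=lambda pair: pair[1], reverse=True)[:5]
print(keywords)
```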

Peter Robinson, “Textual Communities: A Platform for Collaborative Scholarship on Manuscript Heritages”

Peter Robinson gave a talk on “Textual Communities: A Platform for Collaborative Scholarship on Manuscript Heritages” as part of the Singhmar Guest Speaker Program | Faculty of Arts.

He started by talking about whether textual traditions had any relationship to the material world. How do texts relate to each other?

Today, stemmata as visualizations are models that go beyond the manuscripts themselves to propose evolutionary hypotheses in visual form.

He then showed what he is doing with the Canterbury Tales Project and talked about the challenges of adapting the time-consuming transcription process to other manuscripts. There are lots of different transcription systems, but few that handle collation. There is also the problem of costs and of involving a distributed network of people.

He then defined text:

A text is an act of (human) communication that is inscribed in a document.

I wondered how he would deal with Allen Renear’s argument that there are Real Abstract Objects which, like Platonic Forms, are real but have no material instance. When we talk, for example, of “Hamlet” we aren’t talking about a particular instance but an abstract object. Likewise with things like “justice,” “history,” and “love.” Peter responded that the work doesn’t exist except as its instances.

He also mentioned that this is why stand-off markup doesn’t work: texts aren’t a set of linear objects. It is better to represent a text as a tree of leaves.

So, he launched Textual Communities – https://textualcommunities.org/

This is a distributed editing system that also has collation.
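Collation here means aligning the readings of different witnesses so that their variants line up. As a toy illustration (nothing like the actual machinery in Textual Communities, and with invented spelling variants), Python’s difflib can align two witness readings word by word:

```python
# A toy illustration of collation: aligning two witness readings word by word.
# The spelling variants are invented; real collation is far more involved.
from difflib import SequenceMatcher

witness_a = "Whan that Aprill with his shoures soote".split()
witness_b = "Whan that Aprille with hise schoures soote".split()

matcher = SequenceMatcher(None, witness_a, witness_b)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "equal":
        print("agree:", " ".join(witness_a[i1:i2]))
    else:
        print("vary :", " ".join(witness_a[i1:i2]), "||", " ".join(witness_b[j1:j2]))
```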

Skip the bus: this post-apocalyptic jaunt is the only New York tour you’ll ever need

Operation Jane Walk appropriates the hallmarks of an action roleplaying game – Tom Clancy’s The Division (2016), set in a barren New York City after a smallpox pandemic – for an intricately rendered tour that digs into the city’s history through virtual visits to some notable landmarks. Bouncing from Stuyvesant Town to the United Nations Headquarters and down the sewers, a dry-witted tour guide makes plain how NYC was shaped by the Second World War, an evolving economy and the ideological jousting between urban theorists such as Robert Moses and Jane Jacobs. Between stops, the guide segues into musical interludes and poetic musings, but doesn’t let us forget the need to brandish a weapon for self-defence. The result is a highly imaginative film that interrogates the increasingly thin lines between real and digital worlds – but it’s also just a damn good time.

Aeon has a great tour of New York using Tom Clancy’s The Division, Skip the bus: this post-apocalyptic jaunt is the only New York tour you’ll ever need. It looks like someone actually gives tours this way – a new form of urban tourism. What other cities could one do?