Facial Recognition: What Happens When We’re Tracked Everywhere We Go?

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.

The New York Times has an in-depth story about Clearview AI titled Facial Recognition: What Happens When We’re Tracked Everywhere We Go? The story tracks the various lawsuits attempting to stop Clearview and suggests that Clearview may well win. The company is gambling that scraping the web’s faces for its application, even if it violated terms of service, may be protected as free speech.

The story talks about the dangers of face recognition and how many of the algorithms can’t recognize people of colour as accurately, which leads to more false positives where police end up arresting the wrong person. A broader worry is that this could unleash tracking at an entirely new scale.

There’s also a broader reason that critics fear a court decision favoring Clearview: It could let companies track us as pervasively in the real world as they already do online.

The arguments in favour of Clearview include the claim that the company is essentially doing for images what Google does for text searches. Another argument is that stopping face-recognition enterprises would stifle innovation.

The story then moves on to talk about the founding of Clearview and the political connections of the founders (Thiel invested in Clearview too). Finally it talks about how widely available face recognition could affect our lives. The story quotes Alvaro Bedoya, who founded a privacy centre:

“When we interact with people on the street, there’s a certain level of respect accorded to strangers,” Bedoya told me. “That’s partly because we don’t know if people are powerful or influential or we could get in trouble for treating them poorly. I don’t know what happens in a world where you see someone in the street and immediately know where they work, where they went to school, if they have a criminal record, what their credit score is. I don’t know how society changes, but I don’t think it changes for the better.”

It is interesting to think about how face recognition and other technologies may change how we deal with strangers. Too much knowledge could be alienating.

The story closes by describing how Clearview AI helped identify some of the Capitol rioters. Of course it wasn’t just Clearview, but also citizen investigators who named and shamed people based on released photos.

GameStop, AMC and the Stock Market’s Wild Ride This Week

GameStop Stock Price from Monday to Friday

Here’s what happened when investors using apps like Robinhood began wagering on a pool of unremarkable stocks.

We’ve all been following the story about GameStop, AMC and the Stock Market’s Wild Ride This Week. The story has a nice David and Goliath side where amateur traders stick it to the big Wall Street bullies, but it is also about the random power of internet-enabled crowds.


What was Gamergate? The lessons we still haven’t learned

Gamergate should have armed us against bad actors and bad-faith arguments. It didn’t.

Vox has an important article connecting the storming of the US Capitol with Gamergate, What was Gamergate? The lessons we still haven’t learned. The point is that Gamergate and the storming are the visible symptoms of something deeper. I would go further and connect these with activities that progressives approve of, like some of the Anonymous initiatives. For that matter, the recent populist retail investor campaign around stocks like GameStop has similar roots in new forms of organizing and new ironic ideologies.


Gather

Gather is a video-calling space that lets multiple people hold separate conversations in parallel, walking in and out of those conversations just as easily as they would in real life.

Kisha introduced me to Gather, a cross between Second Life and Zoom. With a Gather account you can create a space – your own little classroom with different gathering spots. People then move around these 8-bit animated spaces, and when they are within hearing distance of each other they can video conference. Users can also read posters put up, browse documents left around, or watch videos created for a space. It looks like a nice type of space for a class to use as an alternative to Zoom.

Blogging your research: Tips for getting started

Curious about research blogging, but not sure where to start?

Alice Fleerackers and Lupin Battersby of the ScholCommLab have put together a good post on Blogging your research: Tips for getting started. Despite being committed to blogging (this blog has been going since 2003), I must admit that I’m not sure blogging has the impact it once had. Twitter seems to have replaced blogging as a way to quickly share and follow research. Blog platforms like WordPress have become project news and promotion systems.

What few talk about is how blogging can be a way of journaling for oneself. My blog certainly serves as a form of memory by and for myself. Even if only I search it (which I often do when I’m looking for information about something I knew but forgot), it is still useful. Does everything in academia have to be about promotion and public impact?

In this age of fake news we seem to be back in the situation that Socrates and Gorgias sparred about in Plato’s Gorgias. Gorgias makes the point that the orator or, in today’s terms the communications specialist, can be more convincing than the scholar because they know how to “communicate”.

Socrates: Then the case is the same in all the other arts for the orator and his rhetoric: there is no need to know [459c] the truth of the actual matters, but one merely needs to have discovered some device of persuasion which will make one appear to those who do not know to know better than those who know.

Gorgias: Well, and is it not a great convenience, Socrates, to make oneself a match for the professionals by learning just this single art and omitting all the others? (Gorgias 459a)

It certainly feels like today there is a positive distrust of expertise such that the blatant lie, if repeated often enough, can convince those who want to hear the lie. Does communicating about our research have the beneficial effect we hope it does? Or, does it inflate our bubble without touching that of others?

Freedom Online Coalition joint statement on artificial intelligence

The Freedom Online Coalition (FOC) has issued a joint statement on artificial intelligence (AI) and human rights.  While the FOC acknowledges that AI systems offer unprecedented opportunities for human development and innovation, the Coalition expresses concern over the documented and ongoing use of AI systems towards repressive and authoritarian purposes, including through facial recognition technology […]

The Freedom Online Coalition is a coalition of countries including Canada that “work closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support Internet freedom – free expression, association, assembly, and privacy online – worldwide.” It was founded in 2011 at the initiative of the Dutch.

FOC has just released Joint Statement on Artificial Intelligence and Human Rights that calls for “transparency, traceability and accountability” in the design and deployment of AI systems. They also reaffirm that “states must abide by their obligations under international human rights law to ensure that human rights are fully respected and protected.” The statement ends with a series of recommendations or “Calls to action”.

What is important about this statement is the role it recommends for the state. This is not a set of vapid principles that developers should voluntarily adhere to; it calls for appropriate legislation.

States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination.

I note that yesterday the Liberals introduced a Digital Charter Implementation Act that could significantly change the regulations around data privacy. More on that as I read about it.

Thanks to Florence for pointing this FOC statement out to me.

Why basing universities on digital platforms will lead to their demise – Infolet

I’m republishing here a blog essay originally in Italian that Domenico Fiormonte posted on Infolet that is worth reading,

Why basing universities on digital platforms will lead to their demise

By Domenico Fiormonte

(All links removed. They can be found in the original post – English Translation by Desmond Schmidt)

A group of professors from Italian universities have written an open letter on the consequences of using proprietary digital platforms in distance learning. They hope that a discussion on the future of education will begin as soon as possible and that the investments discussed in recent weeks will be used to create a public digital infrastructure for schools and universities.


Dear colleagues and students,

as you already know, since the COVID-19 emergency began, Italian schools and universities have relied on proprietary platforms and tools for distance learning (including exams), which are mostly produced by the “GAFAM” group of companies (Google, Apple, Facebook, Microsoft and Amazon). There are a few exceptions, such as the Politecnico di Torino, which has adopted instead its own custom-built solutions. However, on July 16, 2020 the European Court of Justice issued a very important ruling, which essentially says that US companies do not guarantee user privacy in accordance with the European General Data Protection Regulation (GDPR). As a result, all data transfers from the EU to the United States must be regarded as non-compliant with this regulation, and are therefore illegal.

A debate on this issue is currently underway in the EU, and the European Authority has explicitly invited “institutions, offices, agencies and organizations of the European Union to avoid transfers of personal data to the United States for new procedures or when securing new contracts with service providers.” In fact the Irish Authority has explicitly banned the transfer of Facebook user data to the United States. Finally, some studies underline how the majority of commercial platforms used during the “educational emergency” (primarily G-Suite) pose serious legal problems and represent a “systematic violation of the principles of transparency.”

In this difficult situation, various organizations, including (as stated below) some university professors, are trying to help Italian schools and universities comply with the ruling. They do so in the interests not only of the institutions themselves, but also of teachers and students, who have the right to study, teach and discuss without being surveilled, profiled and catalogued. The inherent risks in outsourcing teaching to multinational companies, who can do as they please with our data, are not only cultural or economic, but also legal: anyone, in this situation, could complain to the privacy authority to the detriment of the institution for which they are working.

However, the question goes beyond our own right, or that of our students, to privacy. In the renewed COVID emergency we know that there are enormous economic interests at stake, and the digital platforms, which in recent months have increased their turnover (see the study published in October by Mediobanca), now have the power to shape the future of education around the world. An example is what is happening in Italian schools with the national “Smart Class” project, financed with EU funds by the Ministry of Education. This is a package of “integrated teaching” where Pearson contributes the content for all the subjects, Google provides the software, and the hardware is the Acer Chromebook. (Incidentally, Pearson is the second largest publisher in the world, with a turnover of more than 4.5 billion euros in 2018.) And for the schools that join, it is not possible to buy other products.

Finally, although it may seem like science fiction, in addition to stabilizing proprietary distance learning as an “offer”, there is already talk of using artificial intelligence to “support” teachers in their work.

For all these reasons, a group of professors from various Italian universities decided to take action. Our initiative is not currently aimed at presenting an immediate complaint to the data protection officer, but at avoiding it, by allowing teachers and students to create spaces for discussion and encouraging them to make choices that combine their freedom of teaching with their right to study. Only if the institutional response is insufficient or absent will we register, as a last resort, a complaint with the national privacy authority. In this case the first step will be to exploit the “flaw” opened by the EU court ruling to push the Italian privacy authority to intervene (indeed, the former President, Antonello Soro, had already done so, but received no response). The purpose of these actions is certainly not to “block” the platforms that provide distance learning and those who use them, but to push the government to finally invest in the creation of a public infrastructure based on free software for scientific communication and teaching (on the model of what is proposed here and which is already a reality, for example, in France, Spain and other European countries).

As we said above, before appealing to the national authority, a preliminary stage is necessary. Everyone must write to the data protection officer (DPO) requesting some information (attached here is the facsimile of the form for teachers we have prepared). If no response is received within thirty days, or if the response is considered unsatisfactory, we can proceed with the complaint to the national authority. At that point, the conversation will change, because the complaint to the national authority can be made not only by individuals, but also by groups or associations. It is important to emphasize that, even in this avoidable scenario, the question to the data controller is not necessarily a “protest” against the institution, but an attempt to turn it into a better working and study environment for everyone, conforming to European standards.

Creating ethical AI from Indigenous perspectives | Folio

Last week KIAS, AI 4 Society and SKIPP jointly sponsored Jason Lewis presenting on “Reflections on the Indigenous Protocol & Artificial Intelligence Position Paper”.

Prof. Jason Edward Lewis led the Indigenous Protocol and Artificial Intelligence Working Group in providing a starting place for those who want to design and create AI from an ethical position that centres Indigenous perspectives. Dr. Maggie Spivey-Faulkner provided a response.

Lewis talked about the importance of creative explorations by Indigenous people experimenting with AI.

The Folio has published a short story on the talk, Creating ethical AI from Indigenous perspectives. The video should be up soon.

Virtual YouTubers get caught in the middle of a diplomatic spat

It’s relatively easy for those involved in the entertainment industry in Asia to get caught up in geopolitical scuffles, with social media accelerating and magnifying any faux pas.

From the Japan Times I learned about how some hololive vTubers or Virtual YouTubers g[o]t caught in the middle of a diplomatic spat. The vTuber Kiryu Coco, who is apparently a young (3,500 years young) dragon, showed a visualization that mentioned Taiwan as separate from China and thereby ticked off Chinese fans, which led to hololive releasing apologies. Young dragons don’t yet know about the One-China policy. To make matters worse, the apologies/explanations published in different countries differed, which was noticed and needed further explanation. Such are the dangers of trying to appeal to the Chinese, Japanese and US markets at once.

Not knowing much about vTubers I poked around the hololive site. An interesting aspect of the English site is the FAQ information about what you can and cannot send to your favourite talent. Here is their list of things hololive will not accept from fans:

– ALL second hand/used/opened up items that do NOT directly deliver from e-commerce sites such as Amazon (excluding fan letters and message cards)
– Luxury items (individual items which cost more than 30,000 yen)
– Living beings or raw items (including fresh flowers, except flower stands for specified venues and events)
– Items requiring refrigeration
– Handmade items (excluding fan letters and message cards)
– All sorts of stuffed toys, dolls, cushions (no exceptions)
– Currencies (cash, gift cards, coupons, tickets, etc.)
– Cosmetics, perfumes, soap, medicines, etc.
– Dangerous goods (explosives, knives/weapons, drugs, imitation swords, model guns, etc.)
– Clothes, underwear (Scarves, gloves, socks, and blankets are OK)
– Amulets, talismans, charms (items related to religion, politics, or ideological expressions)
– Large items (sizes where the talents would find it impossible to carry home alone)
– Pet supplies
– Items that may violate public order and moral
– Items that may violate laws and regulations
– Additional items (the authorities will perform final confirmation and judgment)

I feel this list is a distant relative of Borges’ taxonomy of animals taken from the fictional Celestial Emporium of Benevolent Knowledge which includes such self-referential animals as “those included in this classification” and “et cetera.”

On a serious note, it is impressive how much these live vTubers can bring in. By some estimates Coco made USD $140,000 in July. The mix of anime characters with live streaming of game playing and other fun seems to be popular. While this phenomenon may look like one of those weird Japan things, I suspect we are going to see more virtual characters, especially if face- and body-tracking tools become easy to use. How could I teach online as a virtual character?

Leaving Humanist

I just read Dr. Bethan Tovey-Walsh’s post on her blog about why she is Leaving Humanist and it raises important issues. Willard McCarty, the moderator of Humanist, a discussion list going since 1987, allowed the posting of a dubious note that made claims about anti-white racism and then refused to publish rebuttals for fear that an argument would erupt. We know about this thanks to Twitter, where Tovey-Walsh tweeted about it. I should add that her reasoning is balanced and avoids calling names. Specifically she argued that,

If Gabriel’s post is allowed to sit unchallenged, this both suggests that such content is acceptable for Humanist, and leaves list members thinking that nobody else wished to speak against it. There are, no doubt, many list members who would not feel confident in challenging a senior academic, and some of those will be people of colour; it would be immoral to leave them with the impression that nobody cares to stand up on their behalf.

I think Willard needs to make some sort of public statement, or the list risks being seen as a place where potentially racist ideas go unchallenged.

August 11 Update: Willard McCarty has apologized and published some of the correspondence he received, including something from Tovey-Walsh. He ends with a proposal that he not stand in the way of the concerns voiced about racism, but he attaches a condition to the expanded dialogue.

I strongly suggest one condition to this expanded scope, apart from care always to respect those with whom we disagree. That condition is relevance to digital humanities as a subject of enquiry. The connection between subject and society is, to paraphrase Kathy Harris (below), that algorithms are not pure, timelessly ideal, culturally neutral expressions but are as we are.