What Sky Bet, The Gambling App, Knows About You

Sky Bet, the most popular one in Britain, compiled extensive records about a user, tracking him in ways he never imagined.

The New York Times has a good story, What Sky Bet, The Gambling App, Knows About You, about the profile that Sky Bet in the UK built on a customer with a gambling addiction.

The company, or one of the data providers it had hired to collect information about users, had access to banking records, mortgage details, location coordinates, and an intimate portrait of his habits wagering on slots and soccer matches.

We tend to focus on what the big guys have and forget all the lesser-known information aggregators and middlemen who buy and sell data. This story is also an example of how valuable data can be to a business like online gambling that wants to attract the clients most likely to become addicted.

Facial Recognition: What Happens When We’re Tracked Everywhere We Go?

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.

The New York Times has an in-depth story about Clearview AI titled Facial Recognition: What Happens When We’re Tracked Everywhere We Go? The story tracks the various lawsuits attempting to stop Clearview and suggests that Clearview may well win. The company is gambling that scraping the web’s faces for its application, even if it violated sites’ terms of service, may be protected as free speech.

The story talks about the dangers of face recognition and how many of the algorithms can’t recognize people of colour as accurately, which leads to more false positives where police end up arresting the wrong person. A broader worry is that this could unleash tracking at an entirely new scale.

There’s also a broader reason that critics fear a court decision favoring Clearview: It could let companies track us as pervasively in the real world as they already do online.

The arguments in favour of Clearview include the claim that it is essentially doing for images what Google does for text searches. Another argument is that stopping face recognition enterprises would stifle innovation.

The story then moves on to the founding of Clearview and the political connections of the founders (Thiel invested in Clearview too). Finally, it talks about how widely available face recognition could affect our lives. The story quotes Alvaro Bedoya, who founded the Center on Privacy & Technology at Georgetown Law:

“When we interact with people on the street, there’s a certain level of respect accorded to strangers,” Bedoya told me. “That’s partly because we don’t know if people are powerful or influential or we could get in trouble for treating them poorly. I don’t know what happens in a world where you see someone in the street and immediately know where they work, where they went to school, if they have a criminal record, what their credit score is. I don’t know how society changes, but I don’t think it changes for the better.”

It is interesting to think about how face recognition and other technologies may change how we deal with strangers. Too much knowledge could be alienating.

The story closes by describing how Clearview AI helped identify some of the Capitol rioters. Of course it wasn’t just Clearview; citizen investigators also named and shamed people based on the photos released.

GameStop, AMC and the Stock Market’s Wild Ride This Week

[Chart: GameStop stock price from Monday to Friday]

Here’s what happened when investors using apps like Robinhood began wagering on a pool of unremarkable stocks.

We’ve all been following the story about GameStop, AMC and the Stock Market’s Wild Ride This Week. The story has a nice David and Goliath side where amateur traders stick it to the big Wall Street bullies, but it is also about the random power of internet-enabled crowds.


Why Automation is Different this Time

How is computerization affecting work, and how might AI accelerate change? Erin pointed me to Kurzgesagt – In a Nutshell, a series of explainer videos produced by Kurzgesagt, a German information design firm. They have a video on The Rise of Machines that nicely explains why automation is improving productivity while not increasing the number of jobs. If anything, automation driven by AI seems to be polarizing the market for human work into high-end cognitive jobs and low-end service jobs.

The Whiteness of AI

This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness …

“The Whiteness of AI” was mentioned in an online panel following The State of AI Ethics report (October 2020) from the Montreal AI Ethics Institute. This article starts from the observation that if you search Google Images for “robot” or “AI” you get predominantly images of white (or blue) entities. (Go ahead and try it.) From there it moves to the persistent tendency of White people, “who dominate the academy in the US and Europe, to refuse to see themselves as racialised or race as a matter of concern at all.” (p. 686)

The paper then proposes three theories about the whiteness of AI, to make it strange and to challenge the myth of colour-blindness that many of us in technology-related fields live in. Important reading!

Freedom Online Coalition joint statement on artificial intelligence

The Freedom Online Coalition (FOC) has issued a joint statement on artificial intelligence (AI) and human rights. While the FOC acknowledges that AI systems offer unprecedented opportunities for human development and innovation, the Coalition expresses concern over the documented and ongoing use of AI systems towards repressive and authoritarian purposes, including through facial recognition technology […]

The Freedom Online Coalition is a coalition of countries, including Canada, that “work closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support Internet freedom – free expression, association, assembly, and privacy online – worldwide.” It was founded in 2011 at the initiative of the Dutch government.

The FOC has just released the Joint Statement on Artificial Intelligence and Human Rights, which calls for “transparency, traceability and accountability” in the design and deployment of AI systems. It also reaffirms that “states must abide by their obligations under international human rights law to ensure that human rights are fully respected and protected.” The statement ends with a series of recommendations or “Calls to action”.

What is important about this statement is the role it recommends for the state. This is not a set of vapid principles that developers should voluntarily adhere to; it calls for appropriate legislation.

States should consider how domestic legislation, regulation and policies can identify, prevent, and mitigate risks to human rights posed by the design, development and use of AI systems, and take action where appropriate. These may include national AI and data strategies, human rights codes, privacy laws, data protection measures, responsible business practices, and other measures that may protect the interests of persons or groups facing multiple and intersecting forms of discrimination.

I note that yesterday the Liberals introduced a Digital Charter Implementation Act that could significantly change the regulations around data privacy. More on that as I read about it.

Thanks to Florence for pointing this FOC statement out to me.

Why basing universities on digital platforms will lead to their demise – Infolet

I’m republishing here a blog essay, originally in Italian, that Domenico Fiormonte posted on Infolet and that is worth reading:

Why basing universities on digital platforms will lead to their demise

By Domenico Fiormonte

(All links removed. They can be found in the original post – English Translation by Desmond Schmidt)

A group of professors from Italian universities have written an open letter on the consequences of using proprietary digital platforms in distance learning. They hope that a discussion on the future of education will begin as soon as possible and that the investments discussed in recent weeks will be used to create a public digital infrastructure for schools and universities.


Dear colleagues and students,

as you already know, since the COVID-19 emergency began, Italian schools and universities have relied on proprietary platforms and tools for distance learning (including exams), which are mostly produced by the “GAFAM” group of companies (Google, Apple, Facebook, Microsoft and Amazon). There are a few exceptions, such as the Politecnico di Torino, which has adopted instead its own custom-built solutions. However, on July 16, 2020 the European Court of Justice issued a very important ruling, which essentially says that US companies do not guarantee user privacy in accordance with the European General Data Protection Regulation (GDPR). As a result, all data transfers from the EU to the United States must be regarded as non-compliant with this regulation, and are therefore illegal.

A debate on this issue is currently underway in the EU, and the European Authority has explicitly invited “institutions, offices, agencies and organizations of the European Union to avoid transfers of personal data to the United States for new procedures or when securing new contracts with service providers.” In fact the Irish Authority has explicitly banned the transfer of Facebook user data to the United States. Finally, some studies underline how the majority of commercial platforms used during the “educational emergency” (primarily G-Suite) pose serious legal problems and represent a “systematic violation of the principles of transparency.”

In this difficult situation, various organizations, including (as stated below) some university professors, are trying to help Italian schools and universities comply with the ruling. They do so in the interests not only of the institutions themselves, but also of teachers and students, who have the right to study, teach and discuss without being surveilled, profiled and catalogued. The inherent risks in outsourcing teaching to multinational companies, who can do as they please with our data, are not only cultural or economic, but also legal: anyone, in this situation, could complain to the privacy authority to the detriment of the institution for which they are working.

However, the question goes beyond our own right, or that of our students, to privacy. In the renewed COVID emergency we know that there are enormous economic interests at stake, and the digital platforms, which in recent months have increased their turnover (see the study published in October by Mediobanca), now have the power to shape the future of education around the world. An example is what is happening in Italian schools with the national “Smart Class” project, financed with EU funds by the Ministry of Education. This is a package of “integrated teaching” where Pearson contributes the content for all the subjects, Google provides the software, and the hardware is the Acer Chromebook. (Incidentally, Pearson is the second largest publisher in the world, with a turnover of more than 4.5 billion euros in 2018.) And for the schools that join, it is not possible to buy other products.

Finally, although it may seem like science fiction, in addition to stabilizing proprietary distance learning as an “offer”, there is already talk of using artificial intelligence to “support” teachers in their work.

For all these reasons, a group of professors from various Italian universities decided to take action. Our initiative is not currently aimed at presenting an immediate complaint to the data protection officer, but at avoiding one, by allowing teachers and students to create spaces for discussion and encouraging them to make choices that combine their freedom of teaching with their right to study. Only if the institutional response is insufficient or absent will we lodge, as a last resort, a complaint with the national privacy authority. In this case the first step will be to exploit the “flaw” opened by the EU court ruling to push the Italian privacy authority to intervene (indeed, the former President, Antonello Soro, had already done so, but received no response). The purpose of these actions is certainly not to “block” the platforms that provide distance learning and those who use them, but to push the government to finally invest in the creation of a public infrastructure based on free software for scientific communication and teaching (on the model of what is proposed here and which is already a reality, for example, in France, Spain and other European countries).

As we said above, before appealing to the national authority, a preliminary stage is necessary. Everyone must write to the data protection officer (DPO) requesting some information (attached here is the facsimile of the form for teachers we have prepared). If no response is received within thirty days, or if the response is considered unsatisfactory, we can proceed with the complaint to the national authority. At that point, the conversation will change, because the complaint to the national authority can be made not only by individuals, but also by groups or associations. It is important to emphasize that, even in this avoidable scenario, the question to the data controller is not necessarily a “protest” against the institution, but an attempt to turn it into a better working and study environment for everyone, conforming to European standards.

Creating ethical AI from Indigenous perspectives | Folio

Last week KIAS, AI 4 Society and SKIPP jointly sponsored a talk by Jason Lewis, “Reflections on the Indigenous Protocol & Artificial Intelligence Position Paper”.

Prof. Jason Edward Lewis led the Indigenous Protocol and Artificial Intelligence Working Group in providing a starting place for those who want to design and create AI from an ethical position that centres Indigenous perspectives. Dr. Maggie Spivey-Faulkner provided a response.

Lewis talked about the importance of creative explorations by Indigenous people experimenting with AI.

The Folio has published a short piece on the talk, Creating ethical AI from Indigenous perspectives. The video should be up soon.

Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian on Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share and then ditch the drivers they have employed in favour of self-driving cars once those become reliable. In other words, they are first disrupting human taxi services so as to capitalize on driverless technology when it comes. Their current businesses lose money as they feast on venture capital to gain market share, and if they can’t make the switch to driverless they will likely go bankrupt.

This raises the question of whether we will see driverless technology good enough to oust human drivers. I suspect we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so as to make them safe for driverless vehicles. In countries with chaotic and hard-to-map streets (think of medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular, and AI in general, are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, and driving out small human taxi businesses, all in order to monopolize a market that it can then profit from by replacing the drivers who got it there with driverless cars. Why is this the way AI is being commercialized, rather than trying to create better public transit systems or better systems for helping people with disabilities? Whom do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those commercializing them and their financial backers (which are often all of us, through our pension plans) will also feel no moral responsibility, because we are just benefiting from (mostly) legal, innovative businesses. Corporate social responsibility is a myth. At most, corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is just fair game and the inevitable disruption of the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening its impact on the rest of us? Governments have planning units, like city planning departments, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing – a citizens’ revolt. Look at our cities and their sprawl – despite all sorts of research and a long history of sprawl, there is still very little discipline or planning constraining the developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and, yes, even disruption are blessed with innocence, as if, like children, they just do their thing and can’t be expected to anticipate the consequences or pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?

Automatic grading and how to game it

Edgenuity involves short answers graded by an algorithm, and students have already cracked it

The Verge has a story on how students are figuring out how to game automatic marking systems like Edgenuity. The story is titled These students figured out their tests were graded by AI — and the easy way to cheat. It describes a keyword-salad approach where you just enter a list of words the grader may be looking for. The grader doesn’t know whether what you wrote is coherent prose or nonsense; it just looks for the right words. The students in turn get good at skimming the study materials for the keywords needed (or find lists shared by other students online).
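
To see why the trick works, here is a minimal sketch in Python of a keyword-overlap grader. This is my own illustration of the general technique, not Edgenuity’s actual algorithm; the function and example keywords are hypothetical. Note how a keyword salad scores exactly as well as a coherent answer:

```python
import re

# A toy keyword-overlap grader (an illustration, not Edgenuity's actual
# code). It scores an answer purely by how many expected keywords appear,
# so a bag of keywords scores as well as well-formed prose.

def grade(answer: str, keywords: list[str]) -> float:
    """Return the fraction of expected keywords found in the answer."""
    words = set(re.findall(r"[a-z']+", answer.lower()))
    return sum(kw.lower() in words for kw in keywords) / len(keywords)

keywords = ["photosynthesis", "chlorophyll", "sunlight", "glucose"]

real_answer = ("Plants use photosynthesis to turn sunlight into glucose, "
               "with chlorophyll absorbing the light.")
keyword_salad = "glucose chlorophyll sunlight photosynthesis"

print(grade(real_answer, keywords))    # 1.0
print(grade(keyword_salad, keywords))  # 1.0 -- the salad scores the same
```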

Perhaps we could build a tool called Edgenorance to which you could feed the study materials and which would generate the keyword list automatically. It could watch the lectures for you, do the speech recognition, and then extract the relevant keywords based on the text of the question.
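
In the same speculative spirit, the keyword-extraction step of such a tool might look something like the sketch below. Everything here is an assumption: a simple term-frequency heuristic standing in for whatever a real tool would do, with the speech-recognition step already done and the lecture arriving as a plain-text transcript:

```python
import re
from collections import Counter

# A toy stand-in for the keyword-extraction step of the hypothetical
# "Edgenorance" tool. It assumes the lecture has already been run through
# speech recognition and arrives here as plain text.

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "that", "it", "for", "on", "with", "as", "by", "this", "into"}

def extract_keywords(transcript: str, question: str, n: int = 8) -> list[str]:
    """Rank content words in the transcript, boosting any that also
    appear in the question (a simple term-frequency heuristic)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 3)
    question_words = set(re.findall(r"[a-z']+", question.lower()))
    # Words shared with the question get a 3x boost.
    scored = {w: c * (3 if w in question_words else 1)
              for w, c in counts.items()}
    return [w for w, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:n]]

transcript = ("Photosynthesis lets plants turn sunlight into glucose. "
              "Chlorophyll in the leaves absorbs the sunlight, and the "
              "glucose feeds the plant.")
print(extract_keywords(transcript, "How does photosynthesis feed a plant?"))
# e.g. ['photosynthesis', 'plant', 'sunlight', 'glucose', ...]
```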

None of this should be surprising. Companies have been promoting grading algorithms that were probably keyword-based for a while. Such an algorithm works only as long as it is not understood and thus not gamed. Perhaps we will get AIs that can genuinely understand and assess a short paragraph answer, but that would be close to an artificial general intelligence, and such an AGI would change everything.