Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian about Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share now and then ditch the drivers they have employed once self-driving cars become reliable. In other words, they are first disrupting human taxi services so as to capitalize on driverless technology when it comes. Their current business loses money as they feast on venture capital to buy market share, and if they can’t make the switch to driverless vehicles they will likely go bankrupt.

This raises the question of whether we will see driverless technology good enough to oust human drivers. I suspect that we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so that they are safe for driverless vehicles. In countries with chaotic, hard-to-map streets (think medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular, and AI in general, are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, and driving out small human taxi businesses, all in order to monopolize a market it can then profit from by firing the drivers that got it there and replacing them with driverless cars. Why is this the way AI is being commercialized, rather than trying to create better public transit systems or better systems for helping people with disabilities? Who do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those doing the commercializing and their financial backers (which are often all of us, through our pension plans) will also feel no moral responsibility, because we are just benefiting from (mostly) legal, innovative businesses. Corporate social responsibility is a myth. At most, corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is fair game, just the inevitable disruption of the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening the impact on the rest of us? Governments have planning units, like city planning departments, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing – a citizens’ revolt. Look at our cities and their sprawl: despite all sorts of research and a long history of sprawl, there is still very little discipline or planning constraining developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and, yes, even disruption are blessed with innocence, as if, like children, they just do their thing and can’t be expected to anticipate the consequences or pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?

Automatic grading and how to game it

Edgenuity involves short answers graded by an algorithm, and students have already cracked it

The Verge has a story on how students are figuring out how to game automatic marking systems like Edgenuity. The story is titled These students figured out their tests were graded by AI — and the easy way to cheat. It describes a keyword-salad approach: you just enter a list of the words the grader may be looking for. The grader doesn’t know whether what you wrote is legible prose or nonsense; it just looks for the right words. The students, in turn, get good at skimming the study materials for the keywords needed (or find lists shared by other students online).
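
To see why a word salad passes, here is a minimal sketch of a keyword-coverage grader. The rubric words, scoring rule, and the whole function are my invention for illustration; Edgenuity’s actual algorithm is not public.

```python
# A toy keyword-coverage grader: any answer containing enough of the expected
# keywords scores well, whether it is coherent prose or word salad.

import re

def keyword_grade(answer: str, keywords: list[str]) -> float:
    """Return the fraction of rubric keywords that appear in the answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return sum(1 for kw in keywords if kw.lower() in words) / len(keywords)

rubric = ["mitochondria", "energy", "ATP", "respiration"]  # hypothetical rubric

coherent = "The mitochondria produce energy in the form of ATP through cellular respiration."
word_salad = "mitochondria energy ATP respiration"

print(keyword_grade(coherent, rubric))    # 1.0
print(keyword_grade(word_salad, rubric))  # 1.0 -- the grader can't tell the difference
```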

Perhaps we could build a tool called Edgenorance to which you could feed the study materials and which would generate the keyword list automatically. It could watch the lectures for you, do the speech recognition, and then extract the relevant keywords based on the text of the question.
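
A rough sketch of what such a hypothetical Edgenorance tool might do with a lecture transcript, assuming a crude co-occurrence heuristic; the name, the stopword list, and the ranking rule are all invented here.

```python
# Hypothetical "Edgenorance" sketch: guess likely rubric keywords by ranking
# transcript terms that co-occur in sentences with the question's own terms.
# The heuristic, stopword list, and tool name are invented for illustration.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "what", "which", "how", "why", "that", "this", "it", "for"}

def tokens(text: str) -> list[str]:
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def guess_keywords(question: str, transcript: str, n: int = 8) -> list[str]:
    """Return the n transcript terms that appear most often in sentences
    mentioning any of the question's terms."""
    q_terms = set(tokens(question))
    counts = Counter()
    for sentence in re.split(r"[.!?]", transcript):
        sent_tokens = tokens(sentence)
        if q_terms & set(sent_tokens):
            counts.update(t for t in sent_tokens if t not in q_terms)
    return [term for term, _ in counts.most_common(n)]

# Usage: paste the result into the answer box as a comma-separated word salad.
# print(", ".join(guess_keywords(question_text, lecture_transcript)))
```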

None of this should be surprising. Companies have been promoting grading algorithms that were probably word-based for a while. Such an algorithm only works as long as it is not understood and thus not gamed. Perhaps we will get AIs that can genuinely understand and assess a short paragraph answer, but that would be close to an artificial general intelligence, and such an AGI would change everything.

AI Dungeon

AI Dungeon, an infinitely generated text adventure powered by deep learning.

Robert told me about AI Dungeon, a text adventure system that uses GPT-2, a language model from OpenAI that got a lot of attention when it was “released” in 2019. OpenAI felt it was too good to release openly as it could be misused. Instead they released a toy version. Now they have GPT-3, about which I wrote before.

AI Dungeon allows you to choose the type of world you want to play in (fantasy, zombies …). It then generates an effectively infinite game by producing responses to whatever you type. I assume there is some memory, as it repeats my name and the basic setting.
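
The basic loop is easy to imagine. Here is a minimal sketch using the public GPT-2 checkpoint via the Hugging Face transformers library; AI Dungeon’s actual fine-tuned model, prompt design, and memory handling are not shown here, so treat this as an illustration of the idea rather than their implementation.

```python
# A toy AI Dungeon-style loop: keep a rolling transcript, append the player's
# action, and ask a language model to continue the story. Illustrative only.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story = ("You are a wandering scholar in a fantasy kingdom. "
         "You arrive at the gates of a ruined library.\n")
print(story)

while True:
    action = input("> ")
    if action.strip().lower() in {"quit", "exit"}:
        break
    story += f"> {action}\n"
    # Ask the model for only the new text (not the echoed prompt).
    out = generator(story, max_new_tokens=60, do_sample=True,
                    return_full_text=False)
    continuation = out[0]["generated_text"].strip()
    story += continuation + "\n"
    print(continuation)
    # The "memory" is just the growing transcript; a real system would trim or
    # summarize it to stay within the model's 1024-token context window.
```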

Replaying Japan 2020

Replaying Japan is an international conference dedicated to the study of Japanese video games. For the first time this year, the conference is held online and will combine various types of research contents (videos, texts, livestreams) on the theme of esport and competitive gaming in Japan.

This year the Replaying Japan conference was held online. The conference was originally going to be in Liège, Belgium at the Liège Game Lab. We were going to get to try Belgian fries and beer and learn more about the Game Lab. Alas, with the pandemic, the organizers had to pivot and organize an online conference. They did a great job using technologies like Twitch and Minecraft.

Keiji Amano, Tsugumi (Mimi) Okabe, and I had a paper on Ethics and Gaming: A Content Analysis of Annual Reports of the Japanese Game Industry, presented by Prof. Amano. (To read the longer conference paper you need access to the conference materials, but the organizers will be opening those up.) We looked at how major Japanese game companies frame ethical or CSR (corporate social responsibility) issues, which is quite different from how ethics is discussed in the academy.

The two keynotes were both excellent in different ways. Florent Georges talked about First Steps of Japanese ESports. His talk introduced a number of important early video game competitions. 

Susana Tosca gave the closing keynote. She presented a nuanced and fascinating talk on Mediating the Promised Gameland (see video). She looked at how game tourists visit Japan and interviewed people about this phenomenon of content tourism. This was wrapped in reflections on methodology and tourism. Very interesting, though it raised some ethical issues about how we watch tourists. She was sensitive to the way that ethnographers are tourists of a sort and we need to be careful not to mock our subjects as we watch them. As someone who loves to travel and is therefore often a tourist, I’m probably sensitive on this issue.

Leaving Humanist

I just read Dr. Bethan Tovey-Walsh’s post on her blog about why she is Leaving Humanist, and it raises important issues. Willard McCarty, the moderator of Humanist, a discussion list running since 1987, allowed the posting of a dubious note that made claims about anti-white racism and then refused to publish rebuttals for fear that an argument would erupt. We know about this thanks to Twitter, where Tovey-Walsh tweeted about it. I should add that her reasoning is balanced and avoids name-calling. Specifically, she argued that,

If Gabriel’s post is allowed to sit unchallenged, this both suggests that such content is acceptable for Humanist, and leaves list members thinking that nobody else wished to speak against it. There are, no doubt, many list members who would not feel confident in challenging a senior academic, and some of those will be people of colour; it would be immoral to leave them with the impression that nobody cares to stand up on their behalf.

I think Willard needs to make some sort of public statement, or the list risks being seen as a place where potentially racist ideas go unchallenged.

August 11 Update: Willard McCarty has apologized and published some of the correspondence he received, including something from Tovey-Walsh. He ends by proposing not to stand in the way of the concerns voiced about racism, but he attaches a condition to the expanded dialogue:

I strongly suggest one condition to this expanded scope, apart from care always to respect those with whom we disagree. That condition is relevance to digital humanities as a subject of enquiry. The connection between subject and society is, to paraphrase Kathy Harris (below), that algorithms are not pure, timelessly ideal, culturally neutral expressions but are as we are.

OSS advice on how to sabotage organizations or conferences

On Twitter someone posted a link to the 1944 OSS Simple Sabotage Field Manual. This includes simple but brilliant advice on how to sabotage organizations or conferences.

This sounds a lot like what we academics normally do as a matter of principle. I particularly like the advice to “Make ‘speeches.’” I imagine many will recognize their less cooperative moments, or their committee meetings, in this list of actions.

The OSS (Office of Strategic Services) was the US office that turned into the CIA.

Philosophers On GPT-3

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI’s recently released API to GPT-3: see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI is continuing to treat this technology as something to be careful about. Here is Chalmers on ethics,

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

Annette Zimmermann in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training). It is not a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point that any AI application will have to make use of concepts from the application domain and all of these concepts will be contested. There are no simple concepts just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.

In the realm of paper tigers – exploring the failings of AI ethics guidelines

But even the ethical guidelines of the world’s largest professional association of engineers, IEEE, largely fail to prove effective as large technology companies such as Facebook, Google and Twitter do not implement them, notwithstanding the fact that many of their engineers and developers are IEEE members.

AlgorithmWatch is maintaining an inventory of AI ethics frameworks and principles. Their evaluation is that these are not making much of a difference. See In the realm of paper tigers – exploring the failings of AI ethics guidelines. They also note that few come from the Global South; principles seem to be published mostly by the countries that already have an AI industry.

The International Review of Information Ethics

The International Review of Information Ethics (IRIE) has just published Volume 28 which collects papers on Artificial Intelligence, Ethics and Society. This issue comes from the AI, Ethics and Society conference that the Kule Institute for Advanced Study (KIAS) organized.

This issue of the IRIE also marks the first issue published on the PKP platform managed by the University of Alberta Library. KIAS is supporting the transition of the journal over to the new platform as part of its focus on AI, Ethics and Society in partnership with the AI for Society signature area.

We are still ironing out all the bugs and missing links, so bear with us, but the platform is solid and the IRIE is now positioned to sustainably publish original research in this interdisciplinary area.

A Letter on Justice and Open Debate

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty. We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reforms. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. 

Harper’s has published A Letter on Justice and Open Debate signed by all sorts of important people, from Salman Rushdie and Margaret Atwood to J.K. Rowling. The letter is critical of what might be called “cancel culture.”

The letter itself has been critiqued for coming from privileged writers who don’t experience the daily silencing of racism or other forms of prejudice. See the Guardian’s Is free speech under threat from ‘cancel culture’? Four writers respond for a range of responses to the letter, both critical and supportive.

This issue doesn’t seem to me that new. We have been struggling for some time with issues around the tolerance of intolerance. There is a broad range of what is considered tolerable speech and, I think, everyone would agree that there is also intolerable speech that doesn’t merit airing and countering. The problem is knowing where the line is.

What is missing on the internet is a sense of dialogue. Those who speechify (including me, in blog posts like this) do so without entering into dialogue with anyone. We are all broadcasters, many of us without much of an audience. Entering into dialogue, by contrast, carries commitments: to continue the dialogue, to listen, to respect, and to work for resolution. In the broadcast chaos all you can do is pick the stations you will listen to and cancel the others.