AI Dungeon

AI Dungeon, an infinitely generated text adventure powered by deep learning.

Robert told me about AI Dungeon, a text adventure system that uses GPT-2, a language model from OpenAI that got a lot of attention when it was “released” in 2019. OpenAI felt it was too good to release openly as it could be misused. Instead they released a toy version. Now they have GPT-3, about which I wrote before.

AI Dungeon allows you to choose the type of world you want to play in (fantasy, zombies …). It then improvises an open-ended game by generating a response to each of your inputs. I assume there is some memory, as it remembers my name and the basic setting.
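For readers curious about the mechanics, here is a minimal sketch of how such a loop might be built on the openly released GPT-2 weights via the Hugging Face transformers library. This is my own illustration, not AI Dungeon's actual code; the "memory" is simply the setting and recent turns prepended to the prompt.

```python
# A toy AI-Dungeon-style loop: keep a short "memory" of the setting and
# recent turns, prepend it to each prompt, and let GPT-2 continue the story.
# Illustrative only -- not AI Dungeon's actual implementation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

setting = "You are Geoffrey, a knight exploring a ruined fantasy kingdom."
history = []  # recent player/story turns act as the game's memory

def story_turn(player_input, max_new_tokens=60):
    # Keep only the last few turns so the prompt stays within the context window.
    prompt = setting + "\n" + "\n".join(history[-6:]) + "\n> " + player_input + "\n"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
    history.extend(["> " + player_input, continuation.strip()])
    return continuation.strip()

print(story_turn("I enter the abandoned castle."))
```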

The Man Behind Trump’s Facebook Juggernaut

Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.

I just finished reading important reporting about The Man Behind Trump’s Facebook Juggernaut in the March 9th, 2020 issue of the New Yorker. The long article suggests that it wasn’t Cambridge Analytica or the Russians who swung the 2016 election. If anything had an impact, it was the extensive use of social media, especially Facebook, by the Trump digital campaign under the leadership of Brad Parscale. The Clinton campaign focused on TV spots and believed they were going to win. The Trump campaign gathered lots of data, constantly tried new things, and drew on the Facebook employees “embedded” with the campaign to improve their game.

If each variation is counted as a distinct ad, then the Trump campaign, all told, ran 5.9 million Facebook ads. The Clinton campaign ran sixty-six thousand. “The Hillary campaign thought they had it in the bag, so they tried to play it safe, which meant not doing much that was new or unorthodox, especially online,” a progressive digital strategist told me. “Trump’s people knew they didn’t have it in the bag, and they never gave a shit about being safe anyway.” (p. 49)

One interesting service Facebook offered was “Lookalike Audiences”: you could upload a spotty list of information about people, and Facebook would first fill it out from its own data and then find you more people who are similar. This lets you expand the list of people you can microtarget (and gets you paying Facebook for more targeted ads).
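Facebook does not publish how Lookalike Audiences actually works, but the general idea of expanding a seed list by similarity can be sketched as a nearest-neighbour search over user feature vectors. Everything below (the features, the data, the numbers) is made up for illustration.

```python
# Toy "lookalike audience" expansion: given a seed list of known supporters,
# find the most similar users in a larger pool by nearest-neighbour search.
# Purely illustrative -- not Facebook's actual system or features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
pool = rng.normal(size=(10_000, 20))   # feature vectors for the wider user pool
seed = rng.normal(size=(200, 20))      # feature vectors for the uploaded seed list

# Index the pool, then pull the 10 closest pool members for each seed user.
nn = NearestNeighbors(n_neighbors=10, metric="cosine").fit(pool)
_, neighbour_ids = nn.kneighbors(seed)

# Deduplicate to get the expanded "lookalike" audience.
lookalikes = np.unique(neighbour_ids)
print(f"Expanded {len(seed)} seed users into {len(lookalikes)} lookalikes")
```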

The end of the article gets depressing as it recounts how little the Democrats are doing to counter or match the social media campaign for Trump which was essentially underway right after the 2016 election. One worries, by the end, that we will see a repeat.

Marantz, Andrew. (2020, March 9). “#WINNING: Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.” New Yorker, pp. 44-55.

Philosophers On GPT-3

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI’s recently released API to GPT-3: see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI continues to treat this technology as something to be careful about. Here is Chalmers on ethics:

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

Annette Zimmermann, in her essay, makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training), nor of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point: any AI application will have to make use of concepts from its application domain, and all of these concepts will be contested. There are no simple concepts, just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.

In the realm of paper tigers – exploring the failings of AI ethics guidelines

But even the ethical guidelines of the world’s largest professional association of engineers, IEEE, largely fail to prove effective as large technology companies such as Facebook, Google and Twitter do not implement them, notwithstanding the fact that many of their engineers and developers are IEEE members.

AlgorithmWatch maintains an inventory of these frameworks and principles. Their evaluation is that they are not making much of a difference; see In the realm of paper tigers – exploring the failings of AI ethics guidelines. They also note that there are few guidelines from the Global South; principles seem to be published mostly by countries that have an AI industry.

The International Review of Information Ethics

The International Review of Information Ethics (IRIE) has just published Volume 28 which collects papers on Artificial Intelligence, Ethics and Society. This issue comes from the AI, Ethics and Society conference that the Kule Institute for Advanced Study (KIAS) organized.

This issue of the IRIE also marks the first issue published on the PKP platform managed by the University of Alberta Library. KIAS is supporting the transition of the journal over to the new platform as part of its focus on AI, Ethics and Society in partnership with the AI for Society signature area.

We are still ironing out all the bugs and missing links, so bear with us, but the platform is solid and the IRIE is now positioned to sustainably publish original research in this interdisciplinary area.

The bad things that happen when algorithms run online shops

Smart software controls the prices and products you see when you shop online – and sometimes it can go spectacularly wrong, discovers Chris Baraniuk.

The BBC has a story about The bad things that happen when algorithms run online shops. It describes how e-commerce systems designed to set prices dynamically (in response to a competitor’s price, for example) can go wrong, ending up either charging customers far more than they would ever pay or charging them virtually nothing so the store loses money.

The story links to an instructive blog entry by Michael Eisen about how two algorithms pushed the price of a book into the millions, Amazon’s $23,698,655.93 book about flies. The entry is a perfect little story about the problems you get when algorithms respond iteratively to each other without any sanity checks.
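Eisen worked out that one seller kept pricing its copy just below its competitor’s, while the competitor priced well above the first seller’s, so each daily update compounded the other. Here is a small sketch of that feedback loop; the multipliers are roughly the ones Eisen reports, and the starting prices are my own invention.

```python
# Two repricing bots with no sanity checks, as in Eisen's account:
# seller A undercuts seller B slightly, while seller B prices well above
# seller A (perhaps to cover drop-shipping costs). Each update compounds
# the other, so the price grows exponentially. Multipliers are approximate.
price_a, price_b = 100.0, 100.0   # hypothetical starting prices in dollars

for day in range(25):
    price_a = 0.9983 * price_b        # A: slightly undercut B
    price_b = 1.270589 * price_a      # B: price well above A
    print(f"day {day + 1:2d}:  A = ${price_a:,.2f}   B = ${price_b:,.2f}")

# A single sanity check -- e.g. refusing to price far above list price --
# would have stopped the runaway.
```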

MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs

Vinay Prabhu, chief scientist at UnifyID, a privacy startup in Silicon Valley, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland, pored over the MIT database and discovered thousands of images labelled with racist slurs for Black and Asian people, and derogatory terms used to describe women. They revealed their findings in a paper undergoing peer review for the 2021 Workshop on Applications of Computer Vision conference.

Another of those “what were they thinking when they created the dataset” stories, from The Register: MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The MIT Tiny Images dataset was created automatically by scripts that drew labels from the WordNet database of terms, which itself contains derogatory terms. Nobody thought to check either the terms taken from WordNet or the resulting images scraped from the net. As a result there are not only lots of images for which permission was never secured, but also racist, sexist, and otherwise derogatory labels on the images, which in turn means that an AI trained on them will generate racist and sexist results.
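As a small illustration of the kind of check that was missing, here is a sketch that screens WordNet noun labels against a denylist before they are ever used as scraping queries. The denylist entries here are placeholders; real auditing would need curated lists and human review.

```python
# Sketch of a sanity check the Tiny Images pipeline lacked: before using
# WordNet nouns as image-search queries, screen them against a denylist.
# The denylist here is a stand-in -- real auditing needs curated lists
# and human review.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

DENYLIST = {"slur", "offensive_term"}  # placeholder entries only

def safe_labels(max_labels=1000):
    labels, flagged = [], []
    for synset in wn.all_synsets(pos="n"):
        for lemma in synset.lemma_names():
            if lemma.lower() in DENYLIST:
                flagged.append((synset.name(), lemma))
            else:
                labels.append(lemma)
            if len(labels) >= max_labels:
                return labels, flagged
    return labels, flagged

labels, flagged = safe_labels()
print(f"kept {len(labels)} labels, flagged {len(flagged)} for review")
```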

The article also mentions a general problem with academic datasets. Companies like Facebook can afford to hire actors to pose for images and can thus secure permission to use the images for training. Academic datasets (and some commercial ones like the Clearview AI database) tend to be scraped, and therefore lack the explicit permission of the copyright holders or the people shown. In effect, academics are resorting to mass surveillance to generate training sets. One wonders whether we could instead crowdsource a training set by and for people.

Introducing the AI4Society Signature Area

AI4Society will provide institutional leadership in this exciting area of teaching, research, and scholarship.

The Quad has a story Introducing the AI4Society Signature Area. Artificial Intelligence for Society is a University of Alberta Signature Area that brings together researchers and instructors from both the sciences and the arts. AI4S looks at how AI can be imagined, designed, and tested so that it serves society. I’m lucky to contribute to this Area as the Associate Director, working with the Director, Eleni Stroulia from Computing Science.

Google Developers Blog: Text Embedding Models Contain Bias. Here’s Why That Matters.

Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we’ll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.

On the Google Developers Blog there is an interesting post, Text Embedding Models Contain Bias. Here’s Why That Matters. The post describes a technique, the Word Embedding Association Test (WEAT), for comparing different text embedding models. The idea is to measure whether groups of words, like gendered names, associate more strongly with positive or negative words. The post includes a figure showing the sentiment bias for female and male names under the different embedding models.

While Google is working on WEAT to try to detect and deal with bias, in our case this technique could be used to identify forms of bias in corpora.
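To give a sense of the mechanics, here is a sketch of the core WEAT calculation from Caliskan et al. (2017): each target word’s association is the difference between its mean cosine similarity to one attribute set and to the other, and the test statistic sums those differences across the two target sets. The embeddings below are random stand-ins; in practice you would look the words up in the model or corpus you are auditing.

```python
# Sketch of the core WEAT statistic (Caliskan et al. 2017): how differently
# do two sets of target words (e.g. female vs. male names) associate with
# two sets of attribute words (e.g. pleasant vs. unpleasant terms)?
# Random vectors stand in for real embeddings here.
import numpy as np

rng = np.random.default_rng(42)
dim = 50

def embed(words):
    # Stand-in: replace with lookups in the embedding model being audited.
    return {w: rng.normal(size=dim) for w in words}

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vecs):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return np.mean([cos(vecs[w], vecs[a]) for a in A]) - \
           np.mean([cos(vecs[w], vecs[b]) for b in B])

X = ["amy", "lisa", "sarah"]          # target set 1 (female names)
Y = ["john", "paul", "mike"]          # target set 2 (male names)
A = ["joy", "love", "peace"]          # attribute set 1 (pleasant)
B = ["agony", "terrible", "war"]      # attribute set 2 (unpleasant)

vecs = embed(X + Y + A + B)
statistic = sum(association(x, A, B, vecs) for x in X) - \
            sum(association(y, A, B, vecs) for y in Y)
print(f"WEAT test statistic: {statistic:.4f}")  # larger magnitude = stronger differential association
```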

COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy

Michael Brown has written a nice article in the U of Alberta folio, COVID-19 contact tracing reveals ethical tradeoffs between public health and privacy. The article quotes me extensively on the ethics of these new Bluetooth contact tracing tools. In the interview I tried to emphasize the importance of knowledge and consent:

  • Users of these apps should know that they are being traced through them, and
  • Users should consent to their use.
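For context, here is a very simplified sketch of the rotating-token exchange that Bluetooth tracing apps of this kind rely on. It is not the actual protocol any of these apps use, just the general idea: phones broadcast short-lived random identifiers, log the identifiers they hear, and only share their logs if the user consents after a diagnosis.

```python
# Very simplified sketch of rotating-identifier contact tracing.
# Not a real tracing protocol -- just the general idea: broadcast
# short-lived random tokens, log tokens heard nearby, and share the log
# only with the user's explicit consent.
import secrets
import time

class Device:
    def __init__(self, name):
        self.name = name
        self.current_token = secrets.token_hex(8)
        self.encounter_log = []   # (token_heard, timestamp), stored locally

    def rotate_token(self):
        # Tokens change regularly so devices cannot be tracked long-term.
        self.current_token = secrets.token_hex(8)

    def hear(self, other):
        # Record a nearby device's current token; no names or locations.
        self.encounter_log.append((other.current_token, time.time()))

    def share_log(self, user_consents: bool):
        # Knowledge and consent: nothing leaves the phone without agreement.
        return self.encounter_log if user_consents else None

alice, bob = Device("alice"), Device("bob")
alice.hear(bob)
bob.hear(alice)
print(alice.share_log(user_consents=True))   # shared only with consent
print(bob.share_log(user_consents=False))    # None -- user declined
```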

There are a variety of these apps, from the system pioneered by Singapore, called TraceTogether, to its Alberta cousin, ABTraceTogether. There are also a variety of approaches to tracing people, from using credit card records to apps like TraceTogether. The EFF has a good essay on Protecting Civil Rights During a Public Health Crisis that I adapt here to provide guidelines for when one might gather data without knowledge or consent:

  • Medically necessary: There should be a clear and compelling explanation as to how this will save lives.
  • Personal information proportionate to need: The information gathered should fit the need and go no further.
  • Information handled by health informatics specialists: The gathering and processing should be handled by health informatics units, not signals intelligence or security services.
  • Deleted: It should be deleted once it is no longer needed.
  • Not organized by vulnerable demographics: The information should not be binned according to stereotypical or vulnerable demographics unless there is a compelling need. We should be very careful that we don’t use the data to further disadvantage groups.
  • Use reviewed afterwards: There should be a review after the crisis is over.
  • Transparency: Governments should be transparent about what they are gathering and why.
  • Due process: There should be open processes for people to challenge the gathering of their information or to challenge decisions taken as a result of such information.