Can GPT-3 Pass a Writer’s Turing Test?

While earlier computational approaches focused on narrow and inflexible grammar and syntax, these new Transformer models offer us novel insights into the way language and literature work.

The Journal of Cultural Analytics has a nice article that asks Can GPT-3 Pass a Writer’s Turing Test? The authors didn’t actually get access to GPT-3, but they did test GPT-2 extensively in different projects, and they assessed the output of GPT-3 reproduced in the essay collection Philosophers On GPT-3. At the end they marked and commented on a number of the published short essays GPT-3 produced in response to the philosophers. They reflect on how one would decide whether GPT-3 is as good as an undergraduate writer.

What they never mention is Richard Powers’ novel Galatea 2.2 (Harper Perennial, 1996). In the novel an AI scientist and the narrator set out to see if they can create an AI that could pass a Master’s exam in English Literature. The novel is very smart and has a tragic ending.

Update: Here is a link to Awesome GPT-3 – a collection of links and articles.

Philosophers On GPT-3


On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI‘s recently released API to GPT-3, see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI continues to treat this technology as something to be careful about. Here is Chalmers on ethics,

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

Annette Zimmerman in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training). It is not a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point that any AI application will have to make use of concepts from the application domain and all of these concepts will be contested. There are no simple concepts just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.


It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2020). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.


Artificial intelligence: Commission takes forward its work on ethics guidelines

The European Commission has announced the next step in its Artificial Intelligence strategy. See Artificial intelligence: Commission takes forward its work on ethics guidelines. The Commission appointed a High-Level Expert Group in June 2018. This group has now developed Seven essentials for achieving trustworthy AI:

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The next step, now announced, is a pilot phase that tests these essentials with stakeholders. The Commission also wants to cooperate with “like-minded partners” like Canada.

What would it mean to participate in the pilot?

Ethicists are no more ethical than the rest of us, study finds

When it comes to the crucial ethical question of calling one’s mother, most people agreed that not doing so was a moral failing.

Quartz reports on a study in Philosophical Psychology finding that ethicists are no more ethical than the rest of us. While one wonders how one can survey how ethical someone is, this is nonetheless a believable result. The contemporary university is deliberately structured not to be a place to change people’s morals, but to educate them. When we teach ethics we don’t assess or grade the morality of the student. Likewise, when we hire, promote, and assess the ethics of a philosophy professor we don’t assess their personal morality. We assess their research, teaching, and service record, all of which can be burnished without actually being ethical. There is, if you will, a professional ethic that research and teaching should not be personal, but detached.

A focus on the teaching and learning of ethics over personal morality is, despite the appearance of hypocrisy, a good thing. We try to create in the university, in the class, and in publications, an openness to ideas, whoever they come from. By avoiding discussing personal morality we try to create a space where people of different views can enter into dialogue about ethics. Imagine what it would be like if it were otherwise. Imagine if my ethics class were about converting students to some standard of behaviour. Who would decide what that standard was? The ethos of professional ethics is one that emphasizes dialogue over action, history over behaviour, and ethical argumentation over disposition. Would it be ethical any other way?

The structure of recent philosophy (II) · Visualizations

In this codebook we will investigate the macro-structure of philosophical literature. As a base for our investigation I have collected about fifty-thousand records…

Stéfan sent me a link to this interesting post, The structure of recent philosophy (II) · Visualizations. Maximilian Noichl has done a fascinating job using the Web of Science to develop a model of the field of Philosophy since the 1950s. In this post he describes his method and the resulting visualization of clusters (see above). In a later post (version III of the project) he gets a more nuanced visualization that seems more true to the breadth of what people do in philosophy. The version above is heavily weighted to Anglo-American analytic philosophy, while version III has more history of philosophy and continental philosophy.

Here is the final poster (PDF) for version III.

I can’t help wondering if his snowball approach doesn’t bias the results. What if one used full text of major journals?
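To make the worry concrete, here is a toy sketch (not Noichl’s actual method; the graph and names are hypothetical) of snowball sampling over a citation graph: starting from seed papers, you repeatedly add everything the current set cites. Whatever the seeds don’t transitively reach never enters the sample, which is exactly how a seed choice could tilt the map toward one tradition.

```python
# Toy illustration of snowball sampling over a hypothetical citation graph.
from collections import deque

# Hypothetical citation graph: paper -> papers it cites.
cites = {
    "analytic_A": ["analytic_B", "analytic_C"],
    "analytic_B": ["analytic_C"],
    "analytic_C": [],
    "continental_X": ["continental_Y"],
    "continental_Y": [],
}

def snowball(seeds, cites, rounds=2):
    """Collect papers reachable from the seeds within `rounds` citation hops."""
    sample = set(seeds)
    frontier = deque(seeds)
    for _ in range(rounds):
        next_frontier = deque()
        while frontier:
            paper = frontier.popleft()
            for cited in cites.get(paper, []):
                if cited not in sample:
                    sample.add(cited)
                    next_frontier.append(cited)
        frontier = next_frontier
    return sample

# Seeding only with analytic papers never surfaces the continental cluster.
print(sorted(snowball(["analytic_A"], cites)))
# The continental papers appear only if a seed reaches them.
print(sorted(snowball(["analytic_A", "continental_X"], cites)))
```

Full-text analysis of major journals, as suggested above, would trade this reachability bias for a different one: the choice of which journals count as major.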

Google AI experiment has you talking to books

Google has announced some cool text projects. See Google AI experiment has you talking to books. One of them, Talk to Books, lets you ask questions or type statements and get answers that are passages from books. This strikes me as a useful research tool, as it allows you to see some (book) references that might be useful for defining an issue. The project is somewhat similar to the Veliza tool that we built into Voyant. Veliza is given a particular text and then uses an Eliza-like algorithm to answer you with passages from that text. Needless to say, Talk to Books is far more sophisticated and is not based simply on word searches. Veliza, on the other hand, can be reprogrammed, and you can specify the text to converse with.
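The core idea behind such a passage-retrieval responder can be sketched in a few lines. This is only an illustration of the general technique (answer a question with the passage sharing the most words with it), not Veliza’s actual code; the function names and sample text are mine.

```python
# Minimal sketch of an Eliza-style "talk to a text" responder: answer a
# question with the passage from a source text that shares the most words
# with the question.
import re

def split_passages(text):
    """Split a text into sentence-like passages."""
    return [p.strip() for p in re.split(r'(?<=[.!?])\s+', text) if p.strip()]

def tokenize(s):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z']+", s.lower())

def respond(question, text):
    """Return the passage of `text` with the greatest word overlap with `question`."""
    q_words = set(tokenize(question))
    # Score each passage by how many question words it contains.
    return max(split_passages(text),
               key=lambda p: len(q_words & set(tokenize(p))))

sample = ("The whale is a mammal. Whales breathe air through blowholes. "
          "Ships hunted whales for oil in the nineteenth century.")
print(respond("Why were whales hunted?", sample))
# → Ships hunted whales for oil in the nineteenth century.
```

A system like Talk to Books goes well beyond this kind of word overlap, matching on meaning rather than shared vocabulary, but the conversational framing is the same.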


The Aggregate IQ Files, Part One: How a Political Engineering Firm Exposed Their Code Base

The Research Director for UpGuard, Chris Vickery (@VickerySec) has uncovered code repositories from AggregateIQ, the Canadian company that was building tools for/with SCL and Cambridge Analytica. See The Aggregate IQ Files, Part One: How a Political Engineering Firm Exposed Their Code Base and AggregateIQ Created Cambridge Analytica’s Election Software, and Here’s the Proof from Gizmodo.

The screenshots from the repository show one project called Ephemeral with the description “Because there is no such thing as THE TRUTH”. The “Primary Data Storage” of Ephemeral is called “Mamba Jamba”, presumably a joke on “mumbo jumbo”, which isn’t a good sign. What is more interesting is the description (see image above) of the data storage system as “The Database of Truth”. Here is a selection of that description:

The Database of Truth is a database system that integrates, obtains, and normalizes data from disparate sources including starting with the RNC data trust.  … This system will be created to make decisions based upon the data source and quality as to which data constitutes the accepted truth and connect via integrations or API to the source systems to acquire and update this data on a regular basis.

A robust front-end system will be built that allows an authorized user to query the Database of Truth to find data for a particular upcoming project, to see how current the data is, and to take a segment of that data and move it to the Escrow Database System. …

The Database of Truth is the Core source of data for the entire system. …

One wonders if there is a philosophical theory, of sorts, in Ephemeral. A theory where no truth is built on the mumbo jumbo of a database of truth(s).

Ephemeral would seem to be part of Project Ripon, the system that Cambridge Analytica never really delivered to the Cruz campaign. Perhaps the system was so ephemeral that it never worked and therefore the Database of Truth never held THE TRUTH. Ripon might be better called Ripoff.

When Women Stopped Coding

The NPR show Planet Money aired an episode in 2014, When Women Stopped Coding, that looks at why the participation of women in computer science changed in 1984 after rising for a decade. Unlike in other professional programs like medical school and law school, the percentage of women went from about 37% in 1984 down to under 20% today. The NPR story suggests that the problem was the promotion of the personal computer at the moment it became affordable. In the 1980s personal computers were heavily marketed to boys, which meant that far more men came to computer science in college with significant experience of computing, something that wasn’t true in the 70s, when there weren’t that many computers in the home and math was what mattered. The story builds on research by Jane Margolis, in particular her book Unlocking the Clubhouse.

This fits with my memories of the time. I remember being jealous of the one or two kids who had Apple IIs in college (in the late 70s) and bought an Apple II clone (a Lemon?) as soon as I had a job, just to start playing with programming. At college I ended up getting 24/7 access to the computing lab in order to be able to use the word processing available (a Pascal editor and Diablo daisy wheel printer for final copy). I hated typing and retyping my papers and fell in love with the backspace key and editing of word processing. I also remember the sense of camaraderie among those who spent all night in the lab typing papers in the face of our teachers’ mistrust of processed text. Was it coincidence that the two of us who shared the best senior thesis prize in philosophy in 1982 wrote our theses in the lab on computers? What the story doesn’t deal with, but Margolis does, is the homosocial club-like atmosphere around computing. This still persists. I’m embarrassed to think of how much I’ve felt a sense of belonging to these informal clubs without asking who was excluded.

Silly Season for Eric Raymond

Eric Raymond, widely admired for his The Cathedral and the Bazaar, is now peddling social justice paranoia. See Why Hackers Must Eject the SJWs. He starts with the following,

The hacker culture, and STEM in general, are under ideological attack. Recently I blogged a safety warning that according to a source I consider reliable, a “women in tech” pressure group has made multiple efforts to set Linus Torvalds up for a sexual assault accusation. I interpreted this as an attempt to beat the hacker culture into political pliability, and advised anyone in a leadership position to beware of similar attempts.

See his “safety warning” at From kafkatrap to honeytrap. His evidence for this ideological attack seems to be gossip from trusted sources – gossip that confirms his views about “women in tech” and pressure groups and so on. This sort of war rhetoric closes any opportunity for discussion around the issues of women in technology. For Raymond it is now a (culture) war between those on the side of hacker culture and STEM, against “Social Justice Warriors” and what is at stake is the “entire civilization that we serve.”

Why are these important issues being militarized instead of aired respectfully? When did the people we live with and love become the other? Just how confident are we that we objectively know what merit is in the hurly-burly of life? What civilization is this really about?

Other reactions to this story include Linus Torvalds targeted by honeytraps, claims Eric S. Raymond in The Register and Is This the Perfect Insane Anti-Feminist Rumor? from New York Magazine.