Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications, and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue that he and a collaborator had with LaMDA, part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the questions of whether LaMDA is really conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (And we could even doubt what we think we are feeling.) One answer is that we have a theory of mind such that we believe that things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what something has to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well? Is the conviction of Lemoine and others enough, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment that may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used, and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if one has doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you need a theory of why their consciousness is false.

They Did Their Own ‘Research.’ Now What? – The New York Times

In spheres as disparate as medicine and cryptocurrencies, “do your own research,” or DYOR, can quickly shift from rallying cry to scold.

The New York Times has a nice essay by John Herrman on They Did Their Own ‘Research.’ Now What? The essay talks about the loss of trust in authorities and the uses/misuses of DYOR (Do Your Own Research) gestures, especially in discussions about cryptocurrencies. DYOR seems to act rhetorically as:

  • Advice that readers should do research before making a decision and not trust authorities (doctors, financial advisors etc).
  • A disclaimer that readers should not blame the author if things don’t turn out right.
  • A scold directed at those who are not committed to whatever is being pushed as based on research. It is a form of research signalling: “I’ve done my research; if you don’t believe me, do yours.”
  • A call to join a community of instant researchers who are skeptical of authority. If you DYOR then you can join us.
  • A call to value the process (of doing your own research) over truth. Enjoy the research process!
  • An invitation to become an independent thinker who is not in thrall to authorities.

The article refers to a previous essay on the dangers of doing one’s own research: one can become unreasonably convinced one has found a truth in a “beginner’s bubble”.

DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.

The question is whether reading around is really doing research or whether it is selective listening. What does it mean to DYOR in the area of vaccines? It seems to mean not trusting science and instead listening to all sorts of sympathetic voices.

What does this mean for the research we do in the humanities? Don’t we sometimes focus too much on discourse and not give due weight to the actual science or authority of those we are “questioning”? Haven’t we modelled this critical stance where what matters is that one overturns hierarchy/authority and democratizes the negotiation of truth? Irony, of course, trumps all.

Alas, to many, the humanities seem to be just another artful conspiracy theory. DYOR!

Why are women philosophers often erased from collective memory?

The history of ideas still struggles to remember the names of notable women philosophers. Mary Hesse is a salient example

Aeon has an important essay on Why are women philosophers often erased from collective memory? The essay argues that a number of notable women philosophers, including Mary Hesse, have been lost (made absent) despite their importance. (You can see her Models and Analogies in Science through the Internet Archive.)

I read this after reading a chapter from Sara Ahmed’s Living a Feminist Life where Ahmed talks about citation practices and how disciplines exclude diverse work in different ways. She does a great job of confronting the various excuses people have for their bleached white citations. Poking around, I find others have written on this, including Victor Ray, whose Inside Higher Ed essay on The Racial Politics of Citation references Richard Delgado’s The Imperial Scholar: Reflections on a Review of Civil Rights Literature from 1984.

What should be done about this? Obviously I’m not the best to suggest remedies, but here are some of the ideas that show up:

  • We need to commit to taking the time to look at the works we read on a subject or for a project and to ask whose voice is missing. This shouldn’t be done at the end as a last-minute fix, but during the ideation phase.
  • We should gather and confront data on our citational patterns from our publications. Knowing what you have done is better than not knowing.
  • We need to do the archaeological work to find and recover marginalized thinkers who have been left out and reflect on why they were left out. Then we need to promote them in teaching and research.
  • We should be willing to call out grants, articles, and proposals we review when it could make a difference.
  • We need to support work to translate thinkers whose work is not in English to balance the distribution of influence.
  • We need to be willing to view our field and its questions very differently.

Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored (by AI)

Black and white and AI-coloured versions of Philosophy by Klimt

Google Arts & Culture launched a hub for all things Gustav Klimt today, which include digital restorations of three lost paintings.

ARTnews, among other places, reports that Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored. The three faculty paintings (Medicine, Philosophy, and Jurisprudence) painted for the University of Vienna were destroyed in a fire, leaving only black and white photographs. Now Google has helped recreate what the three paintings might have looked like using AI, as part of a Google Arts and Culture site on Klimt. You can read about the history of the three faculties here.

Whether in black and white or in colour, the painting of Philosophy (above) is stunning. The original, at 170 by 118 inches, must have been even more impressive in colour. Philosophy is represented by the Sphinx-like figure merging with the universe. To one side is a stream of people, from the young to the old, who hold their heads in confusion. At the bottom is a woman, comparable to the woman in the painting of Medicine, who might be an inspired philosopher looking through us.

The ethics of regulating AI: When too much may be bad

By trying to put prior restraints on the release of algorithms, we will make the same mistake Milton’s censors were making in trying to restrict books before their publication. We will stifle the myriad possibilities inherent in an evolving new technology and the unintended effects that it will foster among new communities who can extend its reach into novel and previously unimaginable avenues. In many ways it will defeat our very goals for new technology, which is its ability to evolve, change and transform the world for the better.

3 Quarks Daily has another nice essay on ethics and AI by Ashutosh Jogalekar. This one is about The ethics of regulating AI: When too much may be bad. The argument is that we need to be careful about regulating algorithms preemptively. As the quote above makes clear, he makes three related points:

  • We need to be careful about censoring algorithms before they are tried.
  • One reason is that it is very difficult to predict negative or positive outcomes of new technologies. Innovative technologies almost always have unanticipated effects and censoring them would limit our ability to learn about the effects and benefit from them.
  • Instead we should manage the effects as they emerge.

I can imagine some responses to this argument:

  • Unanticipated effects are exactly what we should be worried about. The reason for censoring preemptively is precisely to control for unanticipated effects. Why not encourage better anticipation of effects?
  • Unanticipated effects, especially network effects, often only manifest themselves when the technology is used at scale. By then it can be difficult to roll back the technology. Precisely when there is a problem is when we can’t easily change the way the technology is used.
  • One person’s unanticipated effect is another’s business or another’s freedom. There is rarely consensus about how to evaluate effects.

I also note how Jogalekar talks about the technology as if it had agency. He talks about the technology’s ability to evolve. Strictly speaking, the technology doesn’t evolve, but our uses do. When it comes to innovation we have to be careful not to ascribe agency to technology, as if it were some impersonal force we can resist.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a web site with an AI bot that produces inspiring quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and the system, while generating them, also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if the InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps on putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While the InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it offers a model of how we might use AI as a form of prompt, generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?
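
InspiroBot doesn’t say how it works, but the general idea of machine-generated provocations is easy to sketch. Below is a toy generator of faux-inspirational quotes in Python; it has nothing to do with InspiroBot’s actual system, and the phrase fragments are invented for the example.

```python
# A toy faux-inspiration generator. Purely illustrative: InspiroBot's real
# system is not public, and these phrase fragments are made up.
import random

OPENERS = ["Take in how", "Never forget that",
           "Before you sleep, remember that", "True wisdom begins when"]
SUBJECTS = ["your doubt", "the universe", "every stranger you meet", "silence"]
PREDICATES = ["is part of heaven", "already forgives you",
              "was never yours to keep", "knows your name"]

def inspire() -> str:
    """Assemble one random 'inspiration' from the fragment lists."""
    return f"{random.choice(OPENERS)} {random.choice(SUBJECTS)} {random.choice(PREDICATES)}."

if __name__ == "__main__":
    for _ in range(3):
        print(inspire())
```

Even a trivial recombiner like this occasionally produces a line that lands, which is part of what makes the real, AI-backed version both funny and a little unsettling.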

Thanks to Chelsea for this.

Ethics in the Age of Smart Systems

Today was the third day of a symposium I helped organize on Ethics in the Age of Smart Systems. For this we experimented with first organizing monthly “dialogues”: informal papers and discussions on topics in AI ethics. These led into the symposium, which ran over three days. We allowed for an ongoing conversation after the formal part of the event each day. We were also lucky that the keynotes were excellent.

  • Veena Dubal talked about Proposition 22 and how it has created a new employment category of those managed by algorithm (gig workers). She talked about how this amounts to a new racial wage code, as most of the Uber/Lyft workers are people of colour or immigrants.
  • Virginia Dignum talked about how everyone is announcing their principles, but these principles are not enough. She talked about how we need standards; advisory panels and ethics officers; assessment lists (checklists); public awareness; and participation.
  • Rafael Capurro gave a philosophical paper about the smart in smart living. He talked about metis (the Greek for cunning) and different forms of intelligence. He called for hesitation in the sense of taking time to think about smart systems. His point was that there are time regimes of hype and determinism around AI and we need to resist them and take time to think freely about technology.

Can GPT-3 Pass a Writer’s Turing Test?

While earlier computational approaches focused on narrow and inflexible grammar and syntax, these new Transformer models offer us novel insights into the way language and literature work.

The Journal of Cultural Analytics has a nice article that asks Can GPT-3 Pass a Writer’s Turing Test? The authors didn’t actually get access to GPT-3, but they did test GPT-2 extensively in different projects, and they assessed the output of GPT-3 reproduced in an essay on Philosophers On GPT-3. At the end they marked and commented on a number of the published short essays GPT-3 produced in response to the philosophers. They reflect on how one would decide whether GPT-3 is as good as an undergraduate writer.
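
It is easy to get a feel for what such testing looks like, since GPT-2 is freely available. Here is a minimal sketch (not the authors’ code; the prompt is my own invention) that uses the Hugging Face transformers library to prompt GPT-2 and sample a few continuations:

```python
# A minimal sketch of prompting GPT-2 and sampling continuations.
# Not the article's actual code; the prompt below is invented for illustration.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Whether a machine can write as well as an undergraduate depends on"
outputs = generator(prompt, max_length=80, do_sample=True, num_return_sequences=3)

for i, out in enumerate(outputs, 1):
    print(f"--- Sample {i} ---")
    print(out["generated_text"])
```

Marking such samples the way one would mark student essays, as the article does with GPT-3’s replies to the philosophers, is a concrete way of asking what we actually mean by good writing.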

What they never mention is Richard Powers’ novel Galatea 2.2 (Harper Perennial, 1996). In the novel, an AI scientist and the narrator set out to see if they can create an AI that could pass a Master’s English literature exam. The novel is very smart and has a tragic ending.

Update: Here is a link to Awesome GPT-3 – a collection of links and articles.

Philosophers On GPT-3

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI’s recently released API to GPT-3: see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethical issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI continues to treat this technology as something to be careful about. Chalmers’s questions about the ethics are quoted at the top of this post.

Annette Zimmerman in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or in the training data). It is not a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point that any AI application will have to make use of concepts from the application domain and all of these concepts will be contested. There are no simple concepts just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2018). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.
