Dario Amodei: Machines of Loving Grace

Dario Amodei of Anthropic has published a long essay on AI titled Machines of Loving Grace: How AI Could Transform the World for the Better. In the essay he explains that he doesn’t like the term AGI and prefers to talk about “powerful AI,” and he provides a set of characteristics he considers important, including the ability to work on issues in a sustained fashion over time.

Amodei also doesn’t worry much about the Singularity, as he believes powerful AI will still have to deal with real-world problems, like building physical systems, when designing more powerful AI. I tend to agree.

The point of the essay is, however, to focus on five categories of positive applications of AI that are possible:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

The essay is long, so I won’t go into detail. What is important is that he articulates a set of positive goals that AI could help with in these categories. He calls his vision both radical and obvious. In a sense he is right – we have stopped trying to imagine a better world through technology, whether out of cynicism or attention only to details.

Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.

When A.I.’s Output Is a Threat to A.I. Itself

As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

The New York Times has a terrific article on model collapse, When A.I.’s Output Is a Threat to A.I. Itself. They illustrate what happens when an AI is repeatedly trained on its own output.

Model collapse is likely to become a problem for new generative AI systems trained on the internet, which, in turn, is more and more a trash can full of AI-generated misinformation. That companies like OpenAI don’t seem to respect the copyright and creativity of others makes it likely that there will be less and less free human data available. (This blog may end up the last source of fresh human text. 🙂)

The article also has an example of how output can converge, and thus lose diversity, as a model is trained on its own output over and over.

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.
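The convergence the article illustrates can be sketched with a toy simulation (a hypothetical example of mine, not how production models are actually trained): a “model” that simply estimates a token distribution from its training data and is then retrained, generation after generation, on its own samples. Any token that fails to appear in one generation’s output vanishes for good, so diversity can only shrink.

```python
import random
from collections import Counter

def train(samples):
    # "Train": estimate a categorical distribution from observed tokens.
    counts = Counter(samples)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    # "Generate": sample n tokens from the model's distribution.
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(0)
# "Human" data: 20 token types, a few common ones and many rare ones.
human = rng.choices(range(20), weights=[20] * 5 + [1] * 15, k=200)

model = train(human)
vocab_sizes = [len(model)]
for _ in range(30):
    synthetic = generate(model, 200, rng)  # model consumes its own output
    model = train(synthetic)
    vocab_sizes.append(len(model))

# Vocabulary size can only go down: rare tokens missed in one
# generation are gone forever, so the distribution narrows.
print(vocab_sizes)
```

Running this, the vocabulary count shrinks over the generations, which is the toy analogue of the loss of diversity the Times describes.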

Replaying Japan 2023

Replaying Japan 2023  – The 11th International Japan Game Studies Conference – Conference Theme – Local Communities, Digital Communities and Video Games in Japan

I’m back in Canada after Replaying Japan 2023 in Nagoya, Japan. I kept conference notes here for those interested. The book of abstracts is here and the programme is here. Next year the conference will be held in August at the University at Buffalo and the Strong Museum in Rochester. Some points of interest:

  • Nökkvi Jarl Bjarnason gave a talk on the emergence of national and regional game studies. What does it mean to study game culture in a country or region? How is locality appealed to in game media or games or other aspects of game culture?
  • Felania Liu presented on game preservation in China and the challenges her team faces including issues around the legitimacy of game studies.
  • Hirokazu Hamamura gave the final keynote on the evolution of game media starting with magazines and then shifting to the web.
  • I presented a paper co-written with Miki Okabe and Keiji Amano. We started with the demographic challenges Japan faces as its population shrinks. We then looked at what Japanese game companies are doing to attract and support women and families. There is a work ethic that puts men and women in a bind: they are expected to work such long hours that there really isn’t any time left for “work-life balance.”

The conference was held in person at Nagoya Zokei University and brilliantly organized by Keiji Amano and Jean-Marc Pelletier. We limited online interventions to short lightning talks so there was good attendance.

Signing of MOU

See https://twitter.com/PTJCUA1/status/1630853467605721089

Yesterday I was part of a signing ceremony for a Memorandum of Understanding between Ritsumeikan University and the University of Alberta. The President of the University of Alberta (Bill Flanagan) and I signed on behalf of the U of A. The MOU described our desire to build on our collaborations around Replaying Japan. We hope to develop collaborations around artificial intelligence, games, learning, and digital humanities. KIAS and the AI4Society signature area have been supporting this research collaboration.

Today (March 2nd, 2023) we are holding a short conference at Ritsumeikan that includes a panel about our collaboration, at which I talked, and a showcase of research in game studies at Ritsumeikan.

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient which was dismissed by higher-ups. When he went public he was put on paid leave.

Lemoine has posted on Medium a dialogue that he and a collaborator had with LaMDA, part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the question of whether LaMDA really is conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (We could even doubt what we think we are feeling.) One answer is that we have a theory of mind: we believe that things like us probably have similar experiences of consciousness and feeling. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what you have to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it handles language so well? Should that be enough? Is the very conviction of Lemoine and others sufficient, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good, in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When pushed on a similar point it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if you have doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and a willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you would need a theory of why their consciousness is false.

The Internet is Made of Demons

The Internet Is Not What You Think It Is is not what you think it is.

Sam Kriss has written a longish review essay on Justin E.H. Smith’s The Internet is Not What You Think It Is with the title The Internet is Made of Demons. In the first part Kriss writes about how the internet is possessing us and training us,

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech—not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels.

Then Kriss makes the case that the Internet is made of demons – not in a paranoid conspiracy sort of way, but in a historical sense that ideas like the internet often involve demons,

Trithemius invented the internet in a flight of mystical fancy to cover up what he was really doing, which was inventing the internet. Demons disguise themselves as technology, technology disguises itself as demons; both end up being one and the same thing.

In the last section Kriss turns to Justin E.H. Smith’s book and reflects on how the book (unlike the preceding essay “It’s All Over”) is not what the internet expects. The internet, for Smith, likes critical essays that present the internet as a “rupture” – something like the industrial revolution, but for language – while in fact the internet in some form (like demons) has been with us all along. Kriss doesn’t agree. For him the idea of the internet might be old, but what we have now is still a transformation of an old nightmare.

If there are intimations of the internet running throughout history, it might be because it’s a nightmare that has haunted all societies. People have always been aware of the internet: once, it was the loneliness lurking around the edge of the camp, the terrible possibility of a system of signs that doesn’t link people together, but wrenches them apart instead. In the end, what I can’t get away from are the demons. Whenever people imagined the internet, demons were always there.

People Make Games

From a CGSA/ACÉV Statement Against Exploitation and Oppression in Games Education and Industry, a link to a video report by People Make Games. The report documents emotional abuse in the education and indie game space. It deals with how leaders can create a toxic environment and fail to take criticism seriously. A myth of the “auteur” in game design then protects the superstar leaders, which is why the channel is called “People Make Games” (not single auteurs). Watch it.

Masayuki Uemura, Famicom creator, passes

I just got the news that Masayuki Uemura has passed away. Professor Nakamura, Director of the Ritsumeikan Center for Game Studies, sent around this sad announcement.

As it has been announced in various media, we regretfully announce the passing of our beloved former Director and founder of Ritsumeikan Center for Game Studies, and a father of NES and SNES – Professor Masayuki Uemura. We were caught by surprise at the sudden and unfortunate news.

Even after he retired as the director of RCGS and became an advisor, he was always concerned about each researcher and the future of game research.

We would like to extend our deepest condolences to his family and relatives, and may God bless his soul.

As scholars in video game studies and history, we would like to follow his example and continue to excel in our endeavors.

(from Akinori Nakamura, Director, Ritsumeikan Center for Game Studies)

Donald Trump to launch social media platform called Truth Social

The former president, who remains banned from Facebook and Twitter, has a goal to rival those tech giants

The Guardian and other sources are covering the news that Donald Trump is to launch a social media platform called Truth Social. It is typical that he calls the platform the very thing he is accused of not providing: “truth.” Trump has no shame and routinely turns whatever is believed about him, from fake news to being a loser, into an accusation against others. The king of fake news called any story he didn’t like fake news, and when he lost the 2020 election he turned that upside down, making belief that the election was stolen (and that he therefore is not a loser) into a touchstone of Republican belief. How does this end? Do sane Republicans just stop mentioning him at some point? He can’t be disproved or disagreed with; all that can happen is that he gets cancelled. And that is why he wants us to Follow the Truth.

Facial Recognition: What Happens When We’re Tracked Everywhere We Go?

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.

The New York Times has an in-depth story about Clearview AI titled Facial Recognition: What Happens When We’re Tracked Everywhere We Go? The story tracks the various lawsuits attempting to stop Clearview and suggests that Clearview may well win. The company is gambling that scraping the web’s faces for its application, even if it violates terms of service, may be protected as free speech.

The story talks about the dangers of face recognition and how many of the algorithms can’t recognize people of colour as accurately, which leads to more false positives where police end up arresting the wrong person. A broader worry is that this could unleash tracking at another scale.

There’s also a broader reason that critics fear a court decision favoring Clearview: It could let companies track us as pervasively in the real world as they already do online.

The arguments in favour of Clearview include the claim that the company is essentially doing for images what Google does for text searches. Another argument is that stopping face recognition enterprises would stifle innovation.

The story then moves on to the founding of Clearview and the political connections of the founders (Thiel invested in Clearview too). Finally, it talks about how widely available face recognition could affect our lives. The story quotes Alvaro Bedoya, who started a privacy centre,

“When we interact with people on the street, there’s a certain level of respect accorded to strangers,” Bedoya told me. “That’s partly because we don’t know if people are powerful or influential or we could get in trouble for treating them poorly. I don’t know what happens in a world where you see someone in the street and immediately know where they work, where they went to school, if they have a criminal record, what their credit score is. I don’t know how society changes, but I don’t think it changes for the better.”

It is interesting to think about how face recognition and other technologies may change how we deal with strangers. Too much knowledge could be alienating.

The story closes by describing how Clearview AI helped identify some of the Capitol rioters. Of course it wasn’t just Clearview, but also citizen investigators who named and shamed people based on released photos.