A Mirror Test for AI

I recently tried a version of a Mirror Test on ChatGPT 4. I asked it “What is life?” and then, for each answer, I copied the first sentence or two and put it back into the prompt. I wanted to see if ChatGPT was aware that I was parroting its answers back to it. Alas not … it just kept churning through various answers. The openings of its answers were, in order of appearance:

The question “What is life?” has been pondered by philosophers, scientists, theologians, and thinkers of all disciplines for millennia. …

Indeed, the question of “What is life?” is one of the most profound and enduring inquiries in human history. …

Absolutely. The quest to define and understand life is deeply embedded in the human psyche and is reflected in our art, science, philosophy, and spirituality. …

It didn’t repeat itself, but it didn’t ask me why I was repeating what it said. Obviously it fails the Mirror Test.
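The loop is easy to automate if one wants to repeat the experiment. Here is a minimal sketch, assuming the openai Python package (1.x interface); the model name, the number of rounds, and the crude sentence splitting are illustrative choices, not what I actually did by hand:

    # Minimal mirror-test loop: keep the conversation going, but each new user
    # turn just parrots the opening of the model's previous answer back to it.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [{"role": "user", "content": "What is life?"}]
    for round_number in range(5):
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        print(f"Round {round_number}: {reply[:100]}…")
        messages.append({"role": "assistant", "content": reply})
        # Take roughly the first sentence or two of the answer as the next prompt.
        messages.append({"role": "user", "content": ". ".join(reply.split(". ")[:2])})

Because the conversation history is preserved, a system that noticed the parroting would have everything it needs to remark on it.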


Artificial General Intelligence Is Already Here

Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.

Blaise Agüera y Arcas and Peter Norvig have an essay making the argument that Artificial General Intelligence Is Already Here. Their point is that the latest machines like ChatGPT are far more general than previous narrow AIs. They may not be as general as a human, at least without embodiment, but they can do all sorts of textual tasks, including tasks not deliberately programmed into them. Some of the ways they are general include their ability to deal with all sorts of topics, to do different types of tasks, to handle different modalities (images, text …), to use language, and to follow instructions.

The article also mentions reasons why people are still reluctant to admit that we have a form of AGI:

  • “A healthy skepticism about metrics for AGI

  • An ideological commitment to alternative AI theories or techniques

  • A devotion to human (or biological) exceptionalism

  • A concern about the economic implications of AGI”

To some extent the goalposts move as AIs solve different challenges. We used to think playing chess well was a sign of intelligence; now that we know how a computer can do it, it no longer seems a test of intelligence.


AI Has Already Taken Over. It’s Called the Corporation

If corporations were in fact real persons, they would be sociopaths, completely lacking the ability for empathy that is a crucial element of normal human behavior. Unlike humans, however, corporations are theoretically immortal, cannot be put in prison, and the larger multinationals are not constrained by the laws of any individual country.

Jeremy Lent has an essay arguing that AI Has Already Taken Over. It’s Called the Corporation. He isn’t the only one making this point. Indrajit (Indi) Samarajiva has a Medium essay, Corporations Are Already AI, arguing that corporations are legally artificial people with many of the rights of people. They can own property (including people), they have agency, they communicate, and they have intelligence. Just because they aren’t software running on a computer doesn’t mean they aren’t artificial intelligences.

As Samarajiva points out, it would be interesting to review the history of the corporation, looking at examples like the Dutch East India Company, to see if we can understand how AGIs might also emerge and interact with us. He feels that corporate AIs hate us, or at least are indifferent to us.

Another essay that also touches on this is a diary entry by David Runciman on AI in the London Review of Books. His reflections on how our fears about AI mirror earlier fears about corporations are worth quoting in full:

Just as adult human beings are not the only model for natural intelligence – along with children, we heard about the intelligence of plants and animals – computers are not the only model for intelligence of the artificial kind. Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years. If these artificial creatures are taking decisions for us, how can we hold them to account for what they do? In the words of the 18th-century jurist Edward Thurlow, ‘corporations have neither bodies to be punished nor souls to be condemned; they may therefore do as they like.’ We have always been fearful of mechanisms that ape the mechanical side of human intelligence without the natural side. We fear that they lack a conscience. They can think for themselves, but they don’t really understand what it is that they are doing.

OpenAI Changes its Core Values

An article on Semafor points out that OpenAI has changed their list of “Core Values” on their Careers page. Previously, they listed their values as being:

Audacious, Thoughtful, Unpretentious, Pragmatic & Impact-Driven, Collaborative, and Growth-oriented

Now, the list of values has been changed to:

AGI focus, Intense and scrappy, Scale, Make something people love, Team spirit

In particular, the first value reads:

AGI focus

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future.

Anything that doesn’t help with that is out of scope.

This is an unambiguous change from the value of being “Audacious”, which they had glossed with “We make big bets and are unafraid to go against established norms.” They are now committed to AGI (Artificial General Intelligence) which they define on their Charter page as “highly autonomous systems that outperform humans at most economically valuable work”.

It would appear that they are committed to developing AGI that can outperform humans at work that pays, and to making that beneficial. I can’t help wondering why they aren’t also open to developing AGIs that can perform work that isn’t necessarily economically valuable. For that matter, what if the work AGIs can do becomes uneconomic because it can be cheaply done by an AI?

More challenging is the tension around developing AIs that can outperform humans at work that pays. How can creating AGIs that can take our work become a value? How will they make sure this is going to benefit humanity? Is this just a value in the sense of a challenge (can we make AIs that can make money?) or is there an underlying economic vision, and what would that be? I’m reminded of the ambiguous picture Ishiguro presents in Klara and the Sun of a society where only a minority of people are competitive with AIs.

Diversity Commitment

Right above the list of core values on the Careers page, there is a strong diversity statement that reads:

The development of AI must be carried out with a knowledge of and respect for the perspectives and experiences that represent the full spectrum of humanity.

This is not in the list of values, but it is designed to stand out and to introduce them. One wonders if this is just an afterthought or virtue signalling. Given that it is on the Careers page, it could be a warning about what they expect of applicants: “Don’t apply unless you can talk EDI!” It isn’t a commitment to diverse hiring; it is more about what they expect potential hires to know and respect.

Now, they can develop a chatbot that can test applicants’ knowledge of and respect for diversity and save themselves the trouble of diversity hiring.

(Minor edits suggested by ChatGPT.)

The Emergence of Presentation Software and the Prehistory of PowerPoint

PowerPoint presentations have taken over the world despite Edward Tufte’s pamphlet The Cognitive Style of PowerPoint. It seems that in some contexts the “deck” has become the medium of information exchange rather than the report, paper or memo. On Slashdot I came across a link to an MIT Technology Review essay titled Next slide, please: A brief history of the corporate presentation. Another history is available from the Computer History Museum, Slide Logic: The Emergence of Presentation Software and the Prehistory of PowerPoint.

I remember the beginnings of computer-assisted presentations. My unit at the University of Toronto Computing Services experimented with the first tools and projectors. The three-gun projectors were finicky to set up, and I felt a little guilty promoting setups that I knew would take lots of technical support. In one presentation on digital presentations there was actually a colleague under the table making sure all the technology worked while I pitched it to faculty.

I also remember tools before PowerPoint. MORE was an outliner and thinking tool that had a presentation mode, much the way Mathematica does. MORE was developed by Dave Winer, who has a nice page here on the history of the outline processors he worked on. He leaves out, though, how Douglas Engelbart’s Mother of All Demos in 1968 showed something like outlining too.

Alas, PowerPoint came to dominate, though now we have a bunch of innovative presentation tools that work on the web, from Google Slides to Prezi.

Now back to Tufte. His critique still stands. Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence. He points out that a well designed handout or pamphlet (like his pamphlet on The Cognitive Style of PowerPoint) can present a lot more information in a way that doesn’t hide the connections. You can have something more like a concept map that you take people through on a tour. Prezi deserves credit for paying attention to Tufte and breaking out of the linear style.

Now, of course, there are AI tools that can generate presentations, like Presentations.ai or Slideoo. You can see a list of a number of them here. No need to know what you’re presenting: an AI will generate the content, design the slides, and soon present it too.

‘New York Times’ considers legal action against OpenAI as copyright tensions swirl : NPR

The news publisher and maker of ChatGPT have held tense negotiations over striking a licensing deal for the use of the paper’s articles to train the chatbot. Now, legal action is being considered.

Finally we are seeing a serious challenge to the way AI companies exploit written resources on the web, as the New York Times takes on OpenAI: ‘New York Times’ considers legal action against OpenAI as copyright tensions swirl.

A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.

It remains to be seen what the legalities are. Does using a text to train a model constitute making a copy in violation of copyright? Does the model contain something equivalent to a copy of the original? These issues are being explored in the AI image-generation space, where Stability AI is being sued by Getty Images. I hope the New York Times doesn’t just settle quietly before there is a public airing of the issues around the exploitation/ownership of written work. I also note that the Authors Guild is starting to advocate on behalf of authors:

“It says it’s not fair to use our stuff in your AI without permission or payment,” said Mary Rasenberger, CEO of The Author’s Guild. The non-profit writers’ advocacy organization created the letter, and sent it out to the AI companies on Monday. “So please start compensating us and talking to us.”

This could also have repercussions in academia, as many of us scrape the web and social media when studying contemporary issues. For that matter, what do we think about the use of our work? One could say that our work, supported as it is by the public, should be fair game for gathering, training and innovative reuse. Aren’t we supported for the public good? Perhaps we should assert that academic prose is available for training models?

What are our ethics?

The Illusion Of AI’s Existential Risk

In sum, AI acting on its own cannot induce human extinction in any of the ways that extinctions have happened in the past. Appeals to the competitive nature of evolution or previous instances of a more intelligent species causing the extinction of a less intelligent species reflect a common mischaracterization of evolution by natural selection.

Could artificial intelligence (AI) soon get to the point where it could enslave us? An Amii colleague sent me to this sensible article, The Illusion Of AI’s Existential Risk, which argues that it is extremely unlikely that an AI could evolve to the point where it could manipulate us and prevent us from turning it off. One of the points they make is that the situation is completely different from past extinctions.

Our safety is the topic of Brian Christian’s excellent book The Alignment Problem, which talks about different approaches to developing AIs so they are aligned with our values. An important point made by Stuart Russell and quoted in the book is that we don’t want AIs to have the same values as us; we want them to value our having values and to pay attention to our values.

This raises the question of how an AI might know what we value. One approach is Constitutional AI, where we train ethical AIs on a constitution that captures our values and then use them to help align other models.

One of the problems with ethics, however, is that human ethics isn’t simple and may not be something one can capture in a constitution. For this reason another approach is Inverse Reinforcement Learning (IRL), where we ask an AI to infer our values from a mass of evidence of ethical discourse and behaviour.

My guess is that this is what they are trying at OpenAI in their Superalignment project. Imagine an ethical surveillance project that uses IRL to develop a (black) moral box which can be used to train AIs to be aligned. Imagine if it could be tuned to different community ethics?

OpenAI announces Superalignment team

OpenAI has announced a Superalignment team and a four-year project to create an automated alignment researcher. They believe superintelligence (an AI more intelligent than humans) is possible within a decade, and therefore we need to accelerate research into alignment. They believe developing an AI alignment researcher that is itself an AGI will give them a way to scale up and “iteratively align superintelligence.” In other words, they want to set an AI to aligning more powerful AIs.

Alignment is an approach to AI safety that tries to develop AIs so they act as we would want and expect them to. The idea is to make sure that right out of the box AIs would behave in ways aligned with our values.

Needless to say, there are issues with this approach as this nice Conversation piece by Aaron Snoswell, What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues, outlines.

  • First, and importantly, OpenAI has to figure out how to align an AGI so that it can tune the superintelligences to come.
  • You can’t get superalignment without alignment, and we don’t really know what that is or how to get it. There isn’t consensus as to what our values should be, so any alignment would have to be to some particular ethical position.
  • Why is OpenAI focusing only on superalignment? Why not try a number of the approaches from promoting regulation to developing more ethical training datasets? How can they be so sure about one approach? What do they know that we don’t? Or … what do they think they know?
  • Snoswell believes we should start by “acknowledging and addressing existing harms”. There are plenty of immediate difficult problems that should be addressed rather than “kicking the meta-ethical can one block down the road, and hoping we don’t trip over it later on.”
  • Technical safety isn’t a problem that can be solved. It is an ongoing process of testing and refining, as this Tweet from Yann LeCun puts it.

Anyway, I wish them well. No doubt interesting research will come out of this initiative which I hope OpenAI will share. In the meantime the rest of us can carry on with the boring safety research.

OpenAI adds Code Interpreter to ChatGPT Plus

Upload datasets, generate reports, and download them in seconds!

OpenAI has just released a plug-in called Code Interpreter which is truly impressive. You need to have ChatGPT Plus to be able to turn it on. It then allows you to upload data and to use plain English to analyze it. You write requests/prompts like:

What are the top 20 content words in this text?

It then interprets your request and describes what it will try to do in Python. Then it generates the Python and runs it. When it has finished, it shows the results. You can see examples in this Medium article: 

ChatGPT’s Code Interpreter Was Just Released. Here’s How It Will Change Data Science Forever
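To give a sense of what it writes, here is a sketch of the kind of Python a prompt like the one above might produce; the file name, tokenization, and stop-word list are my own illustrative choices, since Code Interpreter improvises its own code each time:

    # Count the 20 most frequent "content" words in an uploaded text.
    import re
    from collections import Counter

    with open("uploaded_text.txt", encoding="utf-8") as f:  # illustrative file name
        text = f.read().lower()

    # A small stop-word list so that function words are not counted as content words.
    stop_words = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
                  "that", "as", "for", "was", "on", "with", "be", "by", "at", "this"}

    words = re.findall(r"[a-z']+", text)
    content_words = [w for w in words if w not in stop_words]

    for word, count in Counter(content_words).most_common(20):
        print(word, count)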

I’ve been trying to see how I can use it to analyze a text. Here are some of the limitations:

  • It can’t handle large texts. It can be used to study a book-length text, but not a collection of books.
  • It frequently tries to load NLTK or other libraries and then fails. What is interesting is that it then tries other ways of achieving the same goal. For example, I asked for adjectives near the word “nature” and, when it couldn’t load the NLTK POS library, it accessed a list of top adjectives in English and searched for those (see the sketch after this list).
  • It can generate graphs of different sorts, but not interactives.
  • It is difficult to get the full transcript of an experiment, where by “full” I mean the Python code, the prompts, the responses, and any graphs generated. You can ask for an iPython notebook with the code, which you can download. Perhaps I can also get a PDF with the images.
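The fallback behaviour described in the second bullet amounts to something like the following sketch; the window size and the short adjective list are illustrative stand-ins for whatever Code Interpreter improvised:

    # Fallback when a POS tagger is unavailable: look for common adjectives
    # within a few words of "nature".
    import re

    with open("uploaded_text.txt", encoding="utf-8") as f:  # illustrative file name
        words = re.findall(r"[a-z']+", f.read().lower())

    common_adjectives = {"good", "new", "great", "little", "old", "own", "other",
                         "human", "wild", "true", "whole", "beautiful", "divine"}
    window = 5  # how many words on either side of "nature" to inspect

    matches = []
    for i, w in enumerate(words):
        if w == "nature":
            neighbours = words[max(0, i - window): i + window + 1]
            matches.extend(a for a in neighbours if a in common_adjectives)

    print(matches)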

The Code Interpreter is in beta so I expect they will be improving it. It is nonetheless very impressive how it can translate prompts into processes. Particularly impressive is how it tries different approaches when things fail.

Code Interpreter could make data analysis and manipulation much more accessible. Without learning to code you can interrogate a data set and potentially run other processes. It is possible to imagine an unshackled Code Interpreter that could access the internet and do all sorts of things (like running a paper-clip business).

‘It was as if my father were actually texting me’: grief in the age of AI

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

The Guardian has a thorough story about the use of AI to evoke the dead, ‘It was as if my father were actually texting me’: grief in the age of AI. The story talks about how one can train an artificial intelligence on past correspondence to mimic someone who has passed away. One can imagine academic uses of this where we create clones of historical figures with which to converse. Do we have enough David Hume to create an interesting AI agent?

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?

The article mentions some of the ethical quandaries:

  • Do dead people have rights? Or do others have rights related to a dead person’s image, voice, and pattern of conversation?
  • Is it healthy to interact with an AI revivification of a close relative?