The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

Today and tomorrow, representatives from a number of countries have gathered at Bletchley Park to discuss AI safety. Close to 30 countries, including Canada, were represented, and they issued The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. The declaration starts with:

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

The declaration discusses opportunities and the need to support innovation, but also mentions that “AI also poses significant risks” and mentions the usual suspects, especially “capable, general-purpose models” that could be repurposed for misuse.

What stands out is the commitment to international collaboration among the major players, including China. This is a good sign.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.

Bletchley Park is becoming a UK symbol of computing. It was, of course, where the Allied code-breaking centre was set up. It is where Turing worked on breaking the Enigma ciphers, and where Colossus, an important early computer, was built to decode German ciphers and give the Allies a crucial advantage. It is appropriate that UK Prime Minister Sunak used this site to gather representatives. Unfortunately, few leaders joined him there, sending representatives instead, though Trudeau may show up on the 2nd.

Alas, the Declaration is short on specifics though individual countries like the United States and Canada are securing voluntary commitments from players to abide by codes of conduct. China and the EU are also passing laws regulating artificial intelligence.

One thing not mentioned at all is the danger of military uses of AI. It is as if warbots are off the table in AI safety discussions.

The good news is that there will be follow-up meetings at which we can hope concrete agreements might be worked out.

 

 

 

Lit sounds: U of A experts help rescue treasure trove of audio cultural history

A U of A professor is helping to rescue tens of thousands of lost audio and video recordings — on tape, film, vinyl or any other bygone media — across North America.

The Folio has a nice story about the SpokenWeb project that I am part of, Lit sounds: U of A experts help rescue treasure trove of audio cultural history. The article discusses the collaboration and importance of archiving to scholarship.

A Mirror Test for AI

I recently tried a version of a Mirror Test on ChatGPT 4. I asked it “What is life?” and then, for each answer, I copied the first sentence or two and put it back into the prompt. I wanted to see if ChatGPT was aware that I was parroting its answers back to it. Alas not … it just kept churning through various answers. The openings of its answers were, in order of appearance:

The question “What is life?” has been pondered by philosophers, scientists, theologians, and thinkers of all disciplines for millennia. …

Indeed, the question of “What is life?” is one of the most profound and enduring inquiries in human history. …

Absolutely. The quest to define and understand life is deeply embedded in the human psyche and is reflected in our art, science, philosophy, and spirituality. …

It didn’t repeat itself, but it didn’t ask me why I was repeating what it said. Obviously it fails the Mirror Test.
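The loop is simple enough to sketch. Below is a minimal version in Python, where `chat` stands in for whatever call you make to the model (with the OpenAI client it might wrap `client.chat.completions.create`); the function names and structure are my illustration of the procedure, not code from the actual experiment.

```python
import re

def first_sentences(text, n=2):
    """Take roughly the first n sentences of a reply."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(parts[:n])

def mirror_test(chat, opening="What is life?", rounds=3):
    """Repeatedly feed the start of the model's own answer back as the next prompt.

    `chat` is any callable mapping a prompt string to a reply string.
    Returns the list of prompts sent, starting with the opening question.
    """
    prompts = [opening]
    for _ in range(rounds):
        reply = chat(prompts[-1])
        prompts.append(first_sentences(reply))
    return prompts
```

A model that “passed” would, at some point, remark on the parroting instead of answering each echoed prompt afresh.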

 

 

Artificial General Intelligence Is Already Here

Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.

Blaise Agüera y Arcas and Peter Norvig have an essay making the argument that Artificial General Intelligence Is Already Here. Their point is that the latest machines like ChatGPT are far more general than previous narrow AIs. They may not be as general as a human, at least without embodiment, but they can do all sorts of textual tasks, including tasks not deliberately programmed into them. Some of the ways they are general include their ability to deal with all sorts of topics, to do different types of tasks, to deal with different modalities (images, text …), their language ability, and their instructability.

The article also mentions reasons why people are still reluctant to admit that we have a form of AGI:

  • “A healthy skepticism about metrics for AGI

  • An ideological commitment to alternative AI theories or techniques

  • A devotion to human (or biological) exceptionalism

  • A concern about the economic implications of AGI”

To some extent the goalposts change as AIs solve different challenges. We used to think playing chess well was a sign of intelligence; now that we know how a computer can do it, it no longer seems a test of intelligence.

 

AI Has Already Taken Over. It’s Called the Corporation

If corporations were in fact real persons, they would be sociopaths, completely lacking the ability for empathy that is a crucial element of normal human behavior. Unlike humans, however, corporations are theoretically immortal, cannot be put in prison, and the larger multinationals are not constrained by the laws of any individual country.

Jeremy Lent has an essay arguing that AI Has Already Taken Over. It’s Called the Corporation. He isn’t the only one making this point. Indrajit (Indi) Samarajiva has a Medium essay, Corporations Are Already AI, arguing that corporations are legally artificial people with many of the rights of people. They can own property (including people), they have agency, they communicate, and they have intelligence. Just because they aren’t software running on a computer doesn’t mean they aren’t artificial intelligences.

As Samarajiva points out, it would be interesting to review the history of the corporation, looking at examples like the Dutch East India Company, to see if we can understand how AGIs might also emerge and interact with us. He feels that corporate AIs hate us, or are at least indifferent to us.

Another essay that touches on this is a diary entry on AI by David Runciman in the London Review of Books. His reflections on how our fears about AI mirror earlier fears about corporations are worth quoting in full:

Just as adult human beings are not the only model for natural intelligence – along with children, we heard about the intelligence of plants and animals – computers are not the only model for intelligence of the artificial kind. Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years. If these artificial creatures are taking decisions for us, how can we hold them to account for what they do? In the words of the 18th-century jurist Edward Thurlow, ‘corporations have neither bodies to be punished nor souls to be condemned; they may therefore do as they like.’ We have always been fearful of mechanisms that ape the mechanical side of human intelligence without the natural side. We fear that they lack a conscience. They can think for themselves, but they don’t really understand what it is that they are doing.

OpenAI Changes its Core Values

An article on Semafor points out that OpenAI has changed their list of “Core Values” on their Careers page. Previously, they listed their values as being:

Audacious, Thoughtful, Unpretentious, Pragmatic & Impact-Driven, Collaborative, and Growth-oriented

Now, the list of values has been changed to:

AGI focus, Intense and scrappy, Scale, Make something people love, Team spirit

In particular, the first value reads:

AGI focus

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future.

Anything that doesn’t help with that is out of scope.

This is an unambiguous change from the value of being “Audacious”, which they had glossed with “We make big bets and are unafraid to go against established norms.” They are now committed to AGI (Artificial General Intelligence) which they define on their Charter page as “highly autonomous systems that outperform humans at most economically valuable work”.

It would appear that they are committed to developing AGI that can outperform humans at work that pays, and to making that beneficial. I can’t help wondering why they aren’t also open to developing AGIs that can perform work that isn’t necessarily economically valuable. For that matter, what if the work AGIs can do becomes uneconomic because it can be cheaply done by an AI?

More challenging is the tension around developing AIs that can outperform humans at work that pays. How can creating AGIs that can take our work become a value? How will they make sure this is going to benefit humanity? Is this just a value in the sense of a challenge (can we make AIs that can make money?) or is there an underlying economic vision, and what would that be? I’m reminded of the ambiguous picture Ishiguro presents in Klara and the Sun of a society where only a minority of people are competitive with AIs.

Diversity Commitment

Right above the list of core values on the Careers page, there is a strong diversity statement that reads:

The development of AI must be carried out with a knowledge of and respect for the perspectives and experiences that represent the full spectrum of humanity.

This is not in the list of values, but it is designed to stand out and to open the values. One wonders if this is just an afterthought or virtue signalling. Given that it is on the Careers page, it could be a warning about what they expect of applicants: “Don’t apply unless you can talk EDI!” It isn’t a commitment to diverse hiring; it is more about what they expect potential hires to know and respect.

Now they can develop a chatbot that tests applicants’ knowledge and respect of diversity and save themselves the trouble of diversity hiring.

(Minor edits suggested by ChatGPT.)

Call for papers 2024 – Replaying Japan

Replaying Japan 2024 – The 12th International Japan Game Studies Conference. Conference theme: Preservation, Innovation and New Directions in Japanese Game Studies. Dates and locations: Monday, August 19 and Tuesday, August 20 at the University at Buffalo, SUNY (North Campus), and Wednesday, August 21 at The Strong National Museum of Play.

The Call for Papers for Replaying Japan 2024 has just gone out. The theme is Preservation, Innovation and New Directions in Japanese Game Studies.

The conference, which is being organized by Tsugumi (Mimi) Okabe at the University at Buffalo, will also have one day at the Strong National Museum of Play in Rochester, which has a fabulous collection of Japanese video game artefacts.

The conference could be considered an example of regional game studies, but Japan is hardly at the periphery of the games industry, even if it is underrepresented in game studies as a field. It might be more accurate to describe the conference, and the community that has gathered around it, as an inter-regional conference where people bring very different perspectives on game studies to an international discussion of Japanese game culture.

History of Information Timeline

An interactive, illustrated timeline of historic moments in humankind’s quest for information. With annotations by Jeremy Norman.

History of Information is a searchable database of events in the history of information. The link will show you the digital humanities category and the events the creator thought were important. I must say that it looks rather biased towards the interventions of white men.

Group hopes to resurrect 128-year-old Cyclorama of Jerusalem, near Quebec City

MONTREAL — The last cyclorama in Canada has been hidden from public view since it closed in 2018, but a small group of people are hoping to revive the unique…

Good news! A Group hopes to resurrect 128-year-old Cyclorama of Jerusalem, near Quebec City. The Cyclorama of Jerusalem is the last cyclorama still standing in Canada. I visited and blogged about it back in 2004. Then it closed, and now they are trying to restore it and sell it.

Cycloramas were the virtual reality of the 19th century. Long paintings, sometimes with props, were mounted in the round in special buildings that allowed people to feel immersed in a painted space. They remind us of the variety of media that have been surpassed – the forgotten types of media.

The Emergence of Presentation Software and the Prehistory of PowerPoint

PowerPoint presentations have taken over the world despite Edward Tufte’s pamphlet The Cognitive Style of PowerPoint. It seems that in some contexts the “deck” has become the medium of information exchange rather than the report, paper or memo. On Slashdot I came across a link to an MIT Technology Review essay titled Next slide, please: A brief history of the corporate presentation. Another history is available from the Computer History Museum: Slide Logic: The Emergence of Presentation Software and the Prehistory of PowerPoint.

I remember the beginnings of computer-assisted presentations. My unit at the University of Toronto Computing Services experimented with the first tools and projectors. The three-gun projectors were finicky to set up, and I felt a little guilty promoting setups which I knew would take lots of technical support. In one presentation on digital presentations, there was actually a colleague under the table making sure all the technology worked while I pitched it to faculty.

I also remember tools before PowerPoint. MORE was an outliner and thinking tool that had a presentation mode, much the way Mathematica does. MORE was developed by Dave Winer, who has a nice page on the history of the outline processors he worked on here. Alas, he leaves out how Douglas Engelbart’s Mother of All Demos in 1968 showed something like outlining too.

Alas, PowerPoint came to dominate, though now we have a bunch of innovative presentation tools that work on the web, from Google Slides to Prezi.

Now back to Tufte. His critique still stands. Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence. He points out that a well-designed handout or pamphlet (like his pamphlet on The Cognitive Style of PowerPoint) can present a lot more information in a way that doesn’t hide the connections. You can have something more like a concept map that you take people through on a tour. Prezi deserves credit for paying attention to Tufte and breaking out of the linear style.

Now, of course, there are AI tools that can generate presentations, like Presentations.ai or Slideoo. You can see a list of a number of them here. No need to know what you’re presenting: an AI will generate the content, design the slides, and soon present it too.