OpenAI Changes its Core Values

An article on Semafor points out that OpenAI has changed their list of “Core Values” on their Careers page. Previously, they listed their values as being:

Audacious, Thoughtful, Unpretentious, Pragmatic & Impact-Driven, Collaborative, and Growth-oriented

Now, the list of values has been changed to:

AGI focus, Intense and scrappy, Scale, Make something people love, Team spirit

In particular, the first value reads:

AGI focus

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future.

Anything that doesn’t help with that is out of scope.

This is an unambiguous change from the value of being “Audacious”, which they had glossed with “We make big bets and are unafraid to go against established norms.” They are now committed to AGI (Artificial General Intelligence), which they define on their Charter page as “highly autonomous systems that outperform humans at most economically valuable work”.

It would appear that they are committed to developing AGI that can outperform humans at work that pays, and to making that beneficial. I can’t help wondering why they aren’t also open to developing AGIs that can perform work that isn’t necessarily economically valuable. For that matter, what happens if the work AGIs can do becomes uneconomic precisely because an AI can do it cheaply?

More challenging is the tension around developing AIs that can outperform humans at work that pays. How can creating AGIs that can take our work become a value? How will they make sure this benefits humanity? Is this just a value in the sense of a challenge (can we make AIs that can make money?), or is there an underlying economic vision, and what would that be? I’m reminded of the ambiguous picture Ishiguro presents in Klara and the Sun of a society where only a minority of people are competitive with AIs.

Diversity Commitment

Right above the list of core values on the Careers page, there is a strong diversity statement that reads:

The development of AI must be carried out with a knowledge of and respect for the perspectives and experiences that represent the full spectrum of humanity.

This is not in the list of values, but it is designed to stand out and to introduce them. One wonders if this is just an afterthought or virtue signalling. Given that it is on the Careers page, it could be a warning about what they expect of applicants: “Don’t apply unless you can talk EDI!” It isn’t a commitment to diverse hiring; it is more about what they expect potential hires to know and respect.

Now, they can develop a chatbot that can test applicants’ knowledge and respect of diversity and save themselves the trouble of diversity hiring.

(Minor edits suggested by ChatGPT.)

Call for papers 2024 – Replaying Japan

Replaying Japan 2024 – The 12th International Japan Game Studies Conference
[Conference Theme] Preservation, Innovation and New Directions in Japanese Game Studies
[Dates] Monday, August 19 (University at Buffalo, SUNY); Tuesday, August 20 (University at Buffalo, SUNY); Wednesday, August 21 (The Strong National Museum of Play)
[Locations] University at Buffalo, SUNY (North Campus) and …

The Call for Papers for Replaying Japan 2024 has just gone out. The theme is Preservation, Innovation and New Directions in Japanese Game Studies.

The conference, which is being organized by Tsugumi (Mimi) Okabe at the University at Buffalo, will also have one day at the Strong National Museum of Play in Rochester, which has a fabulous collection of Japanese video game artefacts.

The conference could be considered an example of regional game studies, but Japan is hardly at the periphery of the games industry, even if it is underrepresented in game studies as a field. It might be more accurate to describe the conference, and the community that has gathered around it, as an inter-regional conference where people bring very different perspectives on game studies to an international discussion of Japanese game culture.

History of Information Timeline

An interactive, illustrated timeline of historic moments in humankind’s quest for information. With annotations by Jeremy Norman.

History of Information is a searchable database of events in the history of information. The link will show you the digital humanities category and what the creator thought were the important events. I must say that it looks rather biased towards the interventions of white men.

Group hopes to resurrect 128-year-old Cyclorama of Jerusalem, near Quebec City

MONTREAL — The last cyclorama in Canada has been hidden from public view since it closed in 2018, but a small group of people are hoping to revive the unique…

Good news! A group hopes to resurrect the 128-year-old Cyclorama of Jerusalem, near Quebec City. The Cyclorama of Jerusalem is the last cyclorama still standing in Canada. I visited and blogged about it back in 2004. Then it closed, and now they are trying to restore it and sell it.

Cycloramas were the virtual reality of the 19th century. Long paintings, sometimes with props, were mounted in the round in special buildings that allowed people to feel immersed in a painted space. They remind us of the variety of types of media that have been surpassed – the forgotten types of media.

The Emergence of Presentation Software and the Prehistory of PowerPoint

PowerPoint presentations have taken over the world despite Edward Tufte’s pamphlet The Cognitive Style of PowerPoint. It seems that in some contexts the “deck” has become the medium of information exchange, rather than the report, paper or memo. On Slashdot I came across a link to an MIT Technology Review essay titled Next slide, please: A brief history of the corporate presentation. Another history is available from the Computer History Museum: Slide Logic: The Emergence of Presentation Software and the Prehistory of PowerPoint.

I remember the beginnings of computer-assisted presentations. My unit at the University of Toronto Computing Services experimented with the first tools and projectors. The three-gun projectors were finicky to set up, and I felt a little guilty promoting setups which I knew would take lots of technical support. In one presentation on digital presentations there was actually a colleague under the table making sure all the technology worked while I pitched it to faculty.

I also remember tools before PowerPoint. MORE was an outliner and thinking tool that had a presentation mode, much the way Mathematica does. MORE was developed by Dave Winer, who has a nice page on the history of the outline processors he worked on here. He leaves out, though, how Douglas Engelbart’s Mother of All Demos in 1968 showed something like outlining too.

Alas, PowerPoint came to dominate, though now we have a bunch of innovative presentation tools that work on the web, from Google Slides to Prezi.

Now back to Tufte. His critique still stands. Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence. He points out that a well-designed handout or pamphlet (like his pamphlet on The Cognitive Style of PowerPoint) can present a lot more information in a way that doesn’t hide the connections. You can have something more like a concept map that you take people through on a tour. Prezi deserves credit for paying attention to Tufte and breaking out of the linear style.

Now, of course, there are AI tools that can generate presentations, like Presentations.ai or Slideoo. You can see a list of a number of them here. No need to know what you’re presenting; an AI will generate the content, design the slides, and soon present it too.

Replaying Japan 2023

Replaying Japan 2023  – The 11th International Japan Game Studies Conference – Conference Theme – Local Communities, Digital Communities and Video Games in Japan

I’m back in Canada after Replaying Japan 2023 in Nagoya, Japan. I kept conference notes here for those interested. The book of abstracts is here and the programme is here. Next year will be in August at the University at Buffalo and the Strong National Museum of Play in Rochester. Some points of interest:

  • Nökkvi Jarl Bjarnason gave a talk on the emergence of national and regional game studies. What does it mean to study game culture in a country or region? How is locality appealed to in game media, in games, or in other aspects of game culture?
  • Felania Liu presented on game preservation in China and the challenges her team faces, including issues around the legitimacy of game studies.
  • Hirokazu Hamamura gave the final keynote on the evolution of game media, starting with magazines and then shifting to the web.
  • I presented a paper co-written with Miki Okabe and Keiji Amano. We started with the demographic challenges faced by Japan as its population shrinks. We then looked at what Japanese game companies are doing to attract and support women and families. There is a work ethic that puts men and women in a bind: they are expected to work such long hours that there really isn’t any time left for “work-life balance.”

The conference was held in person at Nagoya Zokei University and brilliantly organized by Keiji Amano and Jean-Marc Pelletier. We limited online interventions to short lightning talks, so there was good attendance.

The AP lays the groundwork for an AI-assisted newsroom

The Associated Press published standards today for generative AI use in its newsroom.

As we deal with the changes brought about by this recent generation of chatbots in the academy, we could learn from guidelines emerging from other fields like journalism. Engadget reports that The AP lays the groundwork for an AI-assisted newsroom, and you can see the Associated Press guidelines here.

Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work.

AP also suggests they don’t see chatbots replacing journalists any time soon, as “the central role of the AP journalist – gathering, evaluating and ordering facts into news stories, video, photography and audio for our members and customers – will not change.”

It should be noted (as AP does) that they have an agreement with OpenAI.

‘New York Times’ considers legal action against OpenAI as copyright tensions swirl : NPR

The news publisher and maker of ChatGPT have held tense negotiations over striking a licensing deal for the use of the paper’s articles to train the chatbot. Now, legal action is being considered.

Finally we are seeing a serious challenge to the way AI companies are exploiting written resources on the web, as the New York Times takes on OpenAI: ‘New York Times’ considers legal action against OpenAI as copyright tensions swirl.

A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.

It remains to be seen what the legalities are. Does using a text to train a model constitute making a copy in violation of copyright? Does the model contain something equivalent to a copy of the original? These issues are being explored in the AI image-generation space, where Stability AI is being sued by Getty Images. I hope the New York Times doesn’t just settle quietly before there is a public airing of the issues around the exploitation/ownership of written work. I also note that the Authors Guild is starting to advocate on behalf of authors:

“It says it’s not fair to use our stuff in your AI without permission or payment,” said Mary Rasenberger, CEO of the Authors Guild. The non-profit writers’ advocacy organization created the letter, and sent it out to the AI companies on Monday. “So please start compensating us and talking to us.”

This could also have repercussions in academia, as many of us scrape the web and social media when studying contemporary issues. For that matter, what do we think about the use of our own work? One could say that our work, supported as it is by the public, should be fair game for gathering, training and innovative reuse. Aren’t we supported for the public good? Perhaps we should assert that academic prose is available for training models?

What are our ethics?

Worldcoin ignored initial order to stop iris scans in Kenya, records show

The Office of the Data Protection Commissioner in Kenya first instructed Worldcoin to stop collecting personal data in May.

I don’t know what to think about Worldcoin. Is it one more crypto project doomed to disappear, or could it be a nasty exploitative project designed to corner identity by starting in Kenya? Imagine having to get orbed just to use local government services online! Fortunately, Kenya is now ordering them to stop their exploitation; see the TechCrunch story, Worldcoin ignored initial order to stop iris scans in Kenya, records show.

The Illusion Of AI’s Existential Risk

In sum, AI acting on its own cannot induce human extinction in any of the ways that extinctions have happened in the past. Appeals to the competitive nature of evolution or previous instances of a more intelligent species causing the extinction of a less intelligent species reflect a common mischaracterization of evolution by natural selection.

Could artificial intelligence (AI) soon get to the point where it could enslave us? An Amii colleague sent me this sensible article, The Illusion Of AI’s Existential Risk, which argues that it is extremely unlikely that an AI could evolve to the point where it could manipulate us and prevent us from turning it off. One of the points they make is that the situation is completely different from past extinctions.

Our safety is the topic of Brian Christian’s excellent book The Alignment Problem, which discusses different approaches to developing AIs so that they are aligned with our values. An important point made by Stuart Russell and quoted in the book is that we don’t want AIs to have the same values as us; we want them to value our having values and to pay attention to our values.

This raises the question of how an AI might know what we value. One approach is Constitutional AI, where we train an ethical AI on a constitution that captures our values and then use that AI to help train others.
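
To make that concrete, here is a minimal sketch of the critique-and-revise loop at the heart of the Constitutional AI idea. It is illustrative only: generate() is a hypothetical stand-in for a real language-model call, and the two principles are toy examples of a constitution, not anyone’s actual one.

```python
# A toy sketch of a Constitutional-AI-style critique-and-revise loop.
# NOTE: generate() is a hypothetical placeholder, not a real API; swap in
# an actual language-model call to make this do real work.

CONSTITUTION = [
    "Choose the response least likely to encourage harm.",
    "Choose the response that respects privacy and autonomy.",
]

def generate(prompt: str) -> str:
    # Placeholder model call: just echoes, so the sketch runs end to end.
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below against this principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    # In Constitutional AI, revised responses like this become training
    # data for fine-tuning, so the values get baked into the model itself.
    return response

print(critique_and_revise("How should I deal with a nosy neighbour?"))
```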

One of the problems with ethics, however, is that human ethics isn’t simple and may not be something one can capture in a constitution. For this reason another approach is Inverse Reinforcement Learning (IRL), where we ask an AI to infer our values from a mass of evidence of ethical discourse and behaviour.
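
Here is a toy sketch of the intuition, with made-up data. Real IRL algorithms (maximum-entropy IRL, for instance) fit a reward function to demonstrations through a proper optimization, but the quantity they fit to is essentially the state-visitation pattern computed below: states the demonstrator reliably chooses are inferred to be valued.

```python
# A toy illustration of the idea behind Inverse Reinforcement Learning:
# infer which states a demonstrator values from how often their
# demonstrated behaviour visits them. The trajectories are invented
# examples standing in for logged ethical behaviour.
from collections import Counter

demonstrations = [
    ["home", "help_neighbour", "work", "donate"],
    ["home", "work", "help_neighbour", "donate"],
    ["home", "work", "donate"],
]

def visitation_frequencies(trajectories):
    # Count how often each state appears across all demonstrations
    # and normalize to a frequency distribution.
    counts = Counter(state for traj in trajectories for state in traj)
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

# Frequently visited states get a high inferred "reward"; a policy
# trained against this signal would imitate the values implicit in
# the demonstrations.
inferred_reward = visitation_frequencies(demonstrations)
print(sorted(inferred_reward.items(), key=lambda kv: -kv[1]))
```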

My guess is that this is the sort of thing they are trying at OpenAI in their Superalignment project. Imagine an ethical surveillance project that uses IRL to develop a (black) moral box which could be used to train AIs to be aligned. Imagine if it could be tuned to the ethics of different communities.