He Created the Katamari Games, but They’re Rolling On Without Him

The New York Times has a nice story about Keita Takahashi, He Created the Katamari Games, but They’re Rolling On Without Him. Like many Japanese game designers, he gets no royalties and has little say in the future of the game associated with him, Katamari Damacy.

The game itself is a collection game where you roll an ever-growing ball of things that you might see in a typical Japanese house. The balls allow a prince to rebuild the stars accidentally destroyed by his father, the King of All Cosmos. (The image above is of Takahashi as the King.) Rachael Hutchinson has a chapter about the game and Japan in her book Japanese Culture Through Videogames.

Takahashi has a new game coming out soon, to a T.

 

OpenAI announces Superalignment team

OpenAI has announced a Superalignment team and a four-year project to create an automated alignment researcher. They believe superintelligence (an AI more intelligent than humans) is possible within a decade, and that we therefore need to accelerate research into alignment. They believe developing an AI alignment researcher that is itself an AGI will give them a way to scale up and “iteratively align superintelligence.” In other words, they want to set an AI to work aligning more powerful AIs.

Alignment is an approach to AI safety that tries to develop AIs so they act as we would want and expect them to. The idea is to make sure that right out of the box AIs would behave in ways aligned with our values.

Needless to say, there are issues with this approach as this nice Conversation piece by Aaron Snoswell, What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues, outlines.

  • First, and importantly, OpenAI has to figure out how to align an AGI so that it can tune the superintelligences to come.
  • You can’t get superalignment without alignment, and we don’t really know what that is or how to get it. There isn’t consensus as to what our values should be, so any alignment would have to be to some particular ethical position.
  • Why is OpenAI focusing only on superalignment? Why not try a number of the approaches from promoting regulation to developing more ethical training datasets? How can they be so sure about one approach? What do they know that we don’t? Or … what do they think they know?
  • Snoswell believes we should start by “acknowledging and addressing existing harms”. There are plenty of immediate difficult problems that should be addressed rather than “kicking the meta-ethical can one block down the road, and hoping we don’t trip over it later on.”
  • Technical safety isn’t a problem that can be solved once and for all. It is an ongoing process of testing and refining, as this Tweet from Yann LeCun puts it.

Anyway, I wish them well. No doubt interesting research will come out of this initiative, which I hope OpenAI will share. In the meantime, the rest of us can carry on with the boring safety research.

OpenAI adds Code Interpreter to ChatGPT Plus

Upload datasets, generate reports, and download them in seconds!

OpenAI has just released a plug-in called Code Interpreter, which is truly impressive. You need ChatGPT Plus to be able to turn it on. It then allows you to upload data and use plain English to analyze it. You write requests/prompts like:

What are the top 20 content words in this text?

It then interprets your request and describes what it will try to do in Python. Then it generates the Python and runs it. When it has finished, it shows the results. You can see examples in this Medium article: 

ChatGPT’s Code Interpreter Was Just Released. Here’s How It Will Change Data Science Forever

I’ve been trying to see how I can use it to analyze a text. Here are some of the limitations:

  • It can’t handle large texts. It can be used to study a book-length text, but not a collection of books.
  • It frequently tries to load NLTK or other libraries and then fails. What is interesting is that it then tries other ways of achieving the same goal. For example, I asked for adjectives near the word “nature” and when it couldn’t load the NLTK POS library it then accessed a list of top adjectives in English and searched for those.
  • It can generate graphs of different sorts, but not interactives.
  • It is difficult to get the full transcript of an experiment, where by “full” I mean the Python code, the prompts, the responses, and any graphs generated. You can ask for an iPython notebook with the code, which you can download. Perhaps I can also get a PDF with the images.
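To give a sense of the kind of code it writes, here is a sketch of the Python that Code Interpreter might generate for the “top 20 content words” prompt above. This is my own illustration, not its actual output; the small stop-word list is a stand-in for NLTK’s stopwords corpus, mirroring the fallback behaviour described above when a library fails to load.

```python
from collections import Counter
import re

# A small stop-word list stands in for NLTK's stopwords corpus.
STOP_WORDS = {
    "the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
    "that", "i", "you", "he", "she", "was", "for", "on", "with", "as",
}

def top_content_words(text, n=20):
    """Return the n most frequent words that are not stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

sample = "Nature is beautiful. The beauty of nature inspires the poets of nature."
print(top_content_words(sample, 5))
```

The interesting part of Code Interpreter is that it generates, runs, and, when necessary, rewrites code like this for you from a plain-English prompt.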

The Code Interpreter is in beta, so I expect they will be improving it. It is nonetheless very impressive how it can translate prompts into processes. Particularly impressive is how it tries different approaches when things fail.

Code Interpreter could make data analysis and manipulation much more accessible. Without learning to code, you can interrogate a data set and potentially run other processes. It is possible to imagine an unshackled Code Interpreter that could access the internet and do all sorts of things (like running a paper-clip business).

‘It was as if my father were actually texting me’: grief in the age of AI

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

The Guardian has a thorough story about the use of AI to evoke the dead, ‘It was as if my father were actually texting me’: grief in the age of AI. The story talks about how one can train an artificial intelligence on past correspondence to mimic someone who has passed away. One can imagine academic uses of this where we create clones of historical figures with which to converse. Do we have enough David Hume to create an interesting AI agent?

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?

The article mentions some of the ethical quandaries:

  • Do dead people have rights? Or do others have rights related to a dead person’s image, voice, and pattern of conversation?
  • Is it healthy to interact with an AI revivification of a close relative?

 

40 years of the Nintendo Famicom – the console that changed the games industry

Entering a crowded field, the Nintendo Famicom came to dominate the market in the 1980s, leaving a family orientated legacy that continues to be felt today

The Guardian has a good story on the 40th anniversary of the Nintendo Famicom, 40 years of the Nintendo Famicom – the console that changed the games industry. The story quotes James Newman and also mentions Masayuki Uemura, whom Newman and I knew through the Replaying Japan conferences. Alas, Uemura, who was at Ritsumeikan after he retired from Nintendo, passed away in 2021.

The story points out how Nintendo deliberately promoted the Famicom as a family machine that could be hooked up to the family TV (hence “Fami-com”). In various ways they wanted to legitimize gaming as a family experience. By contrast, when Nintendo brought the machine to North America it was remodelled to look like a VCR and called the Nintendo Entertainment System.

Female Experience Simulator

I recently played the Female Experience Simulator after reading about it in a student’s thesis. It is a “text adventure” where you choose your wardrobe and then go somewhere. Inevitably you get harassed. The lesson of the game is,

Did you think that maybe if you changed your clothes or avoided certain places that you could avoid being harassed?

Yeah, it doesn’t work like that.

Welcome to life as a woman.

 

How Canada Accidentally Helped Crack Computer Translation

A technological whodunit—featuring Parliament, computer scientists, and a tipsy plane flight

Arun sent me a link to a neat story about How Canada Accidentally Helped Crack Computer Translation. The story is by Christine Mitchell and is in the Walrus (June 2023). It describes how IBM got ahold of a magnetic reel tape with 14 years of the Hansard – the translated transcripts of the Canadian Parliament. IBM went on to use this data trove to make advances in automatic translation.

The story mentions the politics of automated translation research in Canada. I have previously blogged about the Booths, who were recruited by the NRC to Saskatchewan to work on automated translation. They were apparently pursuing a statistical approach like the one IBM took later, but their funding was cut.
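To give a rough sense of what a statistical approach to translation involves, here is a toy sketch of IBM Model 1-style word alignment learned by expectation maximization. The two sentence pairs are my own invention standing in for a parallel corpus like the Hansard; the real IBM systems were of course vastly larger and more sophisticated.

```python
from collections import defaultdict

# A tiny invented parallel corpus (English, French) standing in
# for the millions of aligned Hansard sentences IBM worked with.
corpus = [
    ("the house".split(), "la maison".split()),
    ("the chamber".split(), "la chambre".split()),
]

def ibm_model1(corpus, iterations=10):
    """Estimate word-translation probabilities t(f, e) with EM (IBM Model 1 flavour)."""
    t = defaultdict(lambda: 0.1)  # uniform-ish initialization
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        # E-step: distribute each foreign word's mass over possible sources.
        for e_sent, f_sent in corpus:
            for f in f_sent:
                norm = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    frac = t[(f, e)] / norm
                    count[(f, e)] += frac
                    total[e] += frac
        # M-step: re-normalize the fractional counts.
        for (f, e) in count:
            t[(f, e)] = count[(f, e)] / total[e]
    return t

t = ibm_model1(corpus)
# Because "la" co-occurs with "the" in both pairs, EM learns to align them.
```

Even on this toy corpus the co-occurrence statistics pull “la” toward “the”, which is the core insight: with enough aligned parliamentary text, translation probabilities can be estimated without any hand-written grammar.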

Speaking of automatic translation, Canada had a computerized system, METEO, for translating daily weather forecasts from Environment Canada. It ran from 1981 to 2001 and was an early successful real-world implementation of automatic translation. It came out of work at the TAUM (Traduction Automatique à l’Université de Montréal) research group at the Université de Montréal, which was set up in the late 1960s.

The case for taking AI seriously as a threat to humanity

From the Open Philanthropy site I came across this older (2020) Vox article, The case for taking AI seriously as a threat to humanity by Kelsey Piper. The article nicely summarizes some of the history of concerns around AGI (Artificial General Intelligence), as people tend to call an AI so advanced it might be comparable to human intelligence. This history goes back to Turing’s colleague I.J. Good, who speculated in 1965 that,

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Such an explosion has been called the Singularity by Vernor Vinge and was popularized by Ray Kurzweil.

I came across this while following threads on the whole issue of whether AI will soon become an existential threat. The question of the dangers of AI (whether AGI or just narrow AI) has gotten a lot of attention, especially since Geoffrey Hinton ended his relationship with Google so he could speak about it. He and others signed a short statement published on the site of the Center for AI Safety,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The existential question only becomes relevant if one believes, as many do, that there is a considerable risk that AI research and development is moving so fast that it may soon achieve some level of generality, at which point such an AGI could begin to act in unpredictable and dangerous ways. Alternatively, people could misuse such powerful AGIs to harm us. Open Philanthropy is one group focused on Potential Risks from Advanced AI. They could be classed as an organization with a longtermist view, one that holds it is important for ethics (and philanthropy) to consider long-term issues.

Advances in AI could lead to extremely positive developments, but could also potentially pose risks from intentional misuse or catastrophic accidents.

Others have called for a Manhattan Project for AI Safety. There are, of course, those (including me) who feel that this distracts from the immediate unintended effects of AI, and/or that there is little existential danger for the moment as AGI is decades off. The cynic in me also wonders how much the distraction is intentional, as it both hypes the technology (it’s dangerous, therefore it must be important) and justifies ignoring stubborn immediate problems like racist bias in the training data.

Kelsey Piper has in the meantime published A Field Guide to AI Safety.

The question remains whether AI is dangerous enough to merit the sort of ethical attention that nuclear power, for example, has received.

Jeff Pooley, “Surveillance Publishing”

Arun sent me the link to a good paper by Jeff Pooley on Surveillance Publishing in the Journal of Electronic Publishing. The article compares what Google does to rank pages based on links to citation analysis (which inspired Brin and Page). It looks at how both web search and citation analysis have been monetized by Google and by citation network services like Web of Science. Now publishing companies like Elsevier make money off tools that report on and predict publishing. We write papers with citations and publish them. Then we buy services built on our citational work, and administrators buy services telling them who publishes the most and where the hot areas are. As Pooley puts it,

Siphoning taxpayer, tuition, and endowment dollars to access our own behavior is a financial and moral indignity.
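The link-ranking idea that Pooley traces back to citation analysis (a page or paper cited by many others matters more, especially if its citers themselves matter) can be sketched with a minimal PageRank iteration on a toy citation graph. The graph, node names, and damping factor below are illustrative assumptions, not data from the article.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank: links maps each node to the nodes it cites."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps a small "teleport" share of rank...
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        # ...and passes the rest along its outgoing citations.
        for n, outs in links.items():
            if not outs:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Toy citation graph: paper_c is cited by both of the others.
citations = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": ["paper_a"],
}
rank = pagerank(citations)
# paper_c, cited by both others, ends up with the highest rank.
```

The same recursive logic, applied to citation networks rather than hyperlinks, is what powers the prediction and evaluation products Pooley is worried about.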

The article also points out that predictive services have been around since before Google. The insurance and credit rating businesses have used surveillance for some time.

Pooley ends by talking about how these publication surveillance tools then encourage quantification of academic work and facilitate local and international prioritization. The Anglophone academy measures things and discovers itself so it can then reward itself. What gets lost is the pursuit of knowledge.

In that sense, the “decision tools” peddled by surveillance publishers are laundering machines—context-erasing abstractions of our messy academic realities.

The full abstract is here:

This essay develops the idea of surveillance publishing, with special attention to the example of Elsevier. A scholarly publisher can be defined as a surveillance publisher if it derives a substantial proportion of its revenue from prediction products, fueled by data extracted from researcher behavior. The essay begins by tracing the Google search engine’s roots in bibliometrics, alongside a history of the citation analysis company that became, in 2016, Clarivate. The essay develops the idea of surveillance publishing by engaging with the work of Shoshana Zuboff, Jathan Sadowski, Mariano-Florentino Cuéllar, and Aziz Huq. The recent history of Elsevier is traced to describe the company’s research-lifecycle data-harvesting strategy, with the aim to develop and sell prediction products to university and other customers. The essay concludes by considering some of the potential costs of surveillance publishing, as other big commercial publishers increasingly enter the predictive-analytics business. It is likely, I argue, that windfall subscription-and-APC profits in Elsevier’s “legacy” publishing business have financed its decade-long acquisition binge in analytics. The products’ purpose, moreover, is to streamline the top-down assessment and evaluation practices that have taken hold in recent decades. A final concern is that scholars will internalize an analytics mindset, one already encouraged by citation counts and impact factors.

Source: Pooley | Surveillance Publishing | The Journal of Electronic Publishing