Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous. – Auto-GPT/README.md at master · Torantulino/Auto-GPT

From a video on 3 Quarks Daily about whether ChatGPT can prompt itself, I discovered Auto-GPT. Auto-GPT is powered by GPT-4. You can describe a mission and it will try to launch tasks, assess them, and complete the mission. Needless to say, it was inevitable that someone would find a way to use ChatGPT or one of its relatives to try to complete complicated jobs, including taking over the world, as Chaos-GPT claims to want to do (using Auto-GPT).
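The shape of such an agent loop can be sketched roughly as follows. To be clear, this is not Auto-GPT's actual code: the `llm` function is a canned stub standing in for calls to GPT-4, and the "execution" step is faked; only the propose–execute–assess cycle is the point.

```python
# Rough sketch of an Auto-GPT-style loop (hypothetical, not the project's code).
# `llm` is a canned stub standing in for GPT-4 calls; execution is faked.

def llm(prompt):
    """Stub model: proposes one task, then judges the mission done."""
    if "propose" in prompt:
        return "search the web for background information"
    return "DONE"

def run_mission(mission, max_steps=5):
    history = []
    for _ in range(max_steps):
        # 1. Ask the model to propose the next task for the mission.
        task = llm(f"propose next task for mission: {mission}")
        # 2. Execute it (the real tool can browse, run code, write files).
        result = f"executed: {task}"
        history.append((task, result))
        # 3. Ask the model to assess progress and decide whether to stop.
        if llm(f"assess progress toward {mission} given {history}") == "DONE":
            break
    return history

steps = run_mission("summarize recent AI safety news")
```

The unsettling part is step 3: the model itself decides whether the mission is complete, which is what makes the loop "autonomous."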

How long will it be before someone figures out how to use these tools to do something truly nasty? I give it about 6 months before we get stories of generative AI being used to systematically harass people, or find information on how to harm people, or find ways to waste resources like the paperclip maximizer. Is it surprising that governments like Italy have banned ChatGPT?


U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats

U of A computing scientists work with Japanese researchers to refine a virtual and mixed reality video game that can improve motor skills for older adults and sedentary people.

The Folio of the University of Alberta published a story about a trip to Japan that I and others embarked on, U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats. Ritsumeikan invited us to develop research collaborations around gaming, language and artificial intelligence. Our visit was a chance to further the collaborations, like the one my colleagues Eleni Stroulia and Victor Fernandez Cervantes are developing with Thawmas Ruck around games for older adults. This inter-university set of collaborations builds on projects I was involved in going back to 2011, including a conference (Replaying Japan) and a journal, the Journal of Replaying Japan.

The highlight was the signing of a Memorandum of Understanding by the two presidents (of U of A and Ritsumeikan). I was also involved, as was Professor Nakamura. May the collaboration thrive.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

The Future of Life Institute is calling on AI labs to pause with a letter signed by over 1000 people (including myself), Pause Giant AI Experiments: An Open Letter – Future of Life Institute. The letter asks for a pause so that safety protocols can be developed,

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This letter to AI labs follows a number of essays and opinion pieces suggesting that maybe we are going too fast and should show restraint. This comes in the face of the explosive interest in large language models since ChatGPT.

  • Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk”, arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
  • Yuval Noah Harari has an opinion in the New York Times with the title, “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” where he talks about the dangers of AIs manipulating culture.

We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.

Signing of MOU

See https://twitter.com/PTJCUA1/status/1630853467605721089

Yesterday I was part of a signing ceremony for a Memorandum of Understanding between Ritsumeikan University and the University of Alberta. The President of the University of Alberta (Bill Flanagan) and I signed on behalf of U of A. The MOU described our desire to build on our collaborations around Replaying Japan. We hope to build collaborations around artificial intelligence, games, learning, and digital humanities. KIAS and the AI4Society signature area have been supporting this research collaboration.

Today (March 2nd, 2023) we are holding a short conference at Ritsumeikan that includes a panel on our collaboration, in which I took part, and a showcase of research in game studies at Ritsumeikan.

How Deepfake Videos Are Used to Spread Disinformation – The New York Times

For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.

The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos are actually from a service, Synthesia, that lets you generate videos of talking heads from transcripts you prepare. They offer a range of professionally acted avatars, and their technology generates a video of your text being presented. This is meant for quickly producing training videos (without paying actors), but someone used it for disinformation.

Destroy All Monsters

There has recently been some fuss around the change in the Open Game License for Dungeons & Dragons. So here is a nice story about D&D and its history, Destroy All Monsters.

D&D is a game for people who like rules: in order to play even the basic game, you had to make sense of roughly twenty pages of instructions, which cover everything from “Adjusting Ability Scores” (“Magic-users and clerics can reduce their strength scores by 3 points and add 1 to their prime requisite”) to “Who Gets the First Blow?” (“The character with the highest dexterity strikes first”). In fact, as I wandered farther into the cave, and acquired the rulebooks for Advanced Dungeons & Dragons, I found that there were rules for everything: … It would be a mistake to think of these rules as an impediment to enjoying the game. Rather, the rules are a necessary condition for enjoying the game, and this is true whether you play by them or not. The rules induct you into the world of D&D; they are the long, difficult scramble from the mouth of the cave to the first point where you can stand up and look around.

ChatGPT listed as author on research papers: many scientists disapprove

The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.

We are beginning to see interesting ethical issues crop up regarding the new LLMs (Large Language Models) like ChatGPT. For example, Nature has a news article, ChatGPT listed as author on research papers: many scientists disapprove.

It makes sense to document use, but why would we document use of ChatGPT and not, for example, use of a library or of a research tool like Google Scholar? What about the use of ChatGPT demands that it be acknowledged?

Why scientists are building AI avatars of the dead | WIRED Middle East

Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.

Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.

The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.

Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legalities and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? A living person?

We routinely animate other people in novels, dialogue (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?

Then there are the possibilities for the manipulation of a digital twin or through such a twin.

As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”


How AI image generators work, like DALL-E, Lensa and stable diffusion

Use our simulator to learn how AI generates images from “noise.”

The Washington Post has a nice explainer on how text-to-image generators work: How AI image generators work, like DALL-E, Lensa and stable diffusion. They let you play with the generator, though you have to stick with the predefined phrases. What I hadn’t realized was the role of static noise in the diffusion model: the model is trained to remove noise that has been added to images, and generation runs the process in reverse, starting from pure noise and progressively denoising it toward an image that matches the prompt.
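The denoising idea can be illustrated with a toy example. This is not Stable Diffusion's code: it uses a tiny 1-D "image", an assumed linear noise schedule, and an oracle noise predictor standing in for the trained neural network (the part real systems learn), just to show the forward noising and reverse denoising steps.

```python
import numpy as np

# Toy 1-D sketch of diffusion: forward noising, then reverse denoising.
# Assumptions: linear beta schedule, a 3-"pixel" image, and an oracle
# noise predictor in place of the trained network.

T = 50                                     # number of diffusion steps
betas = np.linspace(1e-4, 0.2, T)          # noise schedule
alpha_bar = np.cumprod(1.0 - betas)        # cumulative signal fraction

rng = np.random.default_rng(0)
x0 = np.array([1.0, -0.5, 0.25])           # the clean "image"

# Forward process: blend the image with static noise in one shot.
noise = rng.standard_normal(x0.shape)
x = np.sqrt(alpha_bar[-1]) * x0 + np.sqrt(1 - alpha_bar[-1]) * noise

def predict_noise(x_t, t):
    """Oracle: recovers the exact noise. A real model is a network
    trained to approximate this from (noisy image, step, prompt)."""
    return (x_t - np.sqrt(alpha_bar[t]) * x0) / np.sqrt(1 - alpha_bar[t])

# Reverse process (deterministic, DDIM-style): start from near-pure
# noise and step back toward a clean sample.
for t in range(T - 1, -1, -1):
    eps = predict_noise(x, t)
    x0_pred = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    if t > 0:
        x = np.sqrt(alpha_bar[t - 1]) * x0_pred + np.sqrt(1 - alpha_bar[t - 1]) * eps
    else:
        x = x0_pred                        # final denoised sample
```

With the oracle the loop recovers the original image exactly; in a real system the learned predictor only approximates the noise, and the text prompt conditions that prediction, which is how "noise" turns into a picture of what you asked for.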

From Bitcoin to Stablecoin: Crypto’s history is a house of cards

The wild beginnings, crazy turns, colorful characters and multiple comebacks of the crypto world

The Washington Post has a nice illustrated history of crypto, From Bitcoin to Stablecoin: Crypto’s history is a house of cards. They use a deck of cards as a visual metaphor and a graph of the ups and downs of crypto. I can’t help thinking that crypto is going to go up again, but when and in what form?

For that matter, where is Ruja Ignatova?