AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
This open letter to AI labs follows a number of essays and opinions suggesting that we may be going too fast and should show restraint, all in the face of the explosive interest in large language models after ChatGPT.
Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk,” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.
For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.
The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos are actually from a service called Synthesia, which allows you to generate videos of talking heads from transcripts that you prepare. It offers different professionally acted avatars, and its technology generates a video of your text being presented. This is supposed to be used for quickly generating training videos (without paying actors), but someone used it for disinformation.
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.
It makes sense to document use, but why would we document the use of ChatGPT and not, for example, that of a library or a research tool like Google Scholar? What about the use of ChatGPT demands that it be acknowledged?
Sinykin talks about this as an “act as groundbreaking as the research itself,” which seems a bit of an exaggeration. It is important that data is being reviewed and published, but this has been happening for a while in other fields. Nonetheless, it is a welcome initiative, especially if it gets attention like the LARB article. In 2013 the Tri-Council (of research agencies in Canada) called for a culture of research data stewardship. In 2015 I worked with Sonja Sapach and Catherine Middleton on a report on a Data Management Plan Recommendation for Social Science and Humanities Funding Agencies. That report looks more at the front end, requiring plans from people submitting grant proposals for data-driven projects, but the aim was the same: that data could be made available for future research.
Sinykin’s essay looks at the poetry publishing culture in the US and how white it is. He shows how data can be used to study inequalities. We also need to ask about the privilege of English poetry and that of culture from the Global North. Not to mention research and research infrastructure.
Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.
Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.
The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.
Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legalities and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? A living person?
We routinely animate other people in novels, dialogue (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?
Then there are the possibilities for manipulating a digital twin, or for manipulation through such a twin.
As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”
My drawings are a reflection of my soul. What happens when artificial intelligence — and anyone with access to it — can replicate them?
Webcomic artist Sarah Andersen has written a timely Opinion for the New York Times on how The Alt-Right Manipulated My Comic. Then A.I. Claimed It. She talks about being harassed by the alt-right, who created a shadow version of her work full of violent, racist, and Nazi motifs. Now she could be haunted by an AI-generated shadow like the image above. Her essay nicely captures the feeling of helplessness that many artists who survive on their work must be feeling before the “research” trick of LAION, the nonprofit associated with Stability AI, which scraped copyrighted material under the cover of academic research that was then made available for commercialization as Stable Diffusion.
Andersen links to a useful article on AI Data Laundering, which is a good term for what researchers seem to be doing, intentionally or not. What is the solution? Datasets gathered with consent? Alas, too many of us, myself included, have released images on Flickr and other sites. So, as the article’s author Andy Baio puts it, “Asking for permission slows technological progress, but it’s hard to take back something you’ve unconditionally released into the world.”
While artists like Andersen may have no legal recourse, that doesn’t make the practice ethical. Perhaps the academics who are doing the laundering should be called out. Perhaps we should consider boycotting such tools and hiring live artists when we have graphic design work.
Computer and video games featuring Nazi symbols such as the swastika can now be sold in Germany uncensored after a regulatory body lifted the longstanding ban.
CNN reported back in 2018 that Germany lifts ban on Nazi symbols in computer games. The game that prompted this was the counterfactual Wolfenstein II: The New Colossus, which imagines that the Nazis won World War II. To sell the game in Germany, the publisher had to change symbols like the swastika, as it is forbidden to display such symbols of “unconstitutional organizations.” Germany has now changed its interpretation of the law so that games are treated as works of art, like movies, where it is legal to show the symbols.
This shows how difficult it can be to ban hate speech while not censoring the arts. For that matter, how does one deal with ironic hate speech – hate speech masquerading as irony?
From a paper on postcolonial computing I learned about the Unitron Mac 512: A Contraband Mac 512K from Brazil. For a while Brazil didn’t allow the importation of computers (so as to kickstart its own computer industry). Unitron decided to reverse engineer the Mac 512K, but Apple put pressure on Brazil and the project was closed down. At least 500 machines were built, and I guess some are still in circulation.
Though Apple had no intellectual property protection for the Macintosh in Brazil, the American corporation was able to pressure government and other economic actors within Brazil to reframe Unitron’s activities, once seen as nationalist and anti-colonial, as immoral piracy.
Last week I created a character on Character.AI, a new artificial intelligence tool created by some ex-Google engineers who worked on LaMDA, the language model from Google that I blogged about before.
Character.AI, which is now down for maintenance due to all the users, lets you quickly create a character and then enter into dialogue with it. It actually works quite well. I created “The Ethics Professor” and then wrote a script of questions that I used to engage the AI character. The dialogue is below.