For the first time, A.I.-generated personas, often used for corporate training, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.
The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos come from a service, Synthesia, which lets you generate videos of talking heads from transcripts you prepare. It offers a range of professionally acted avatars, and its technology generates a video of your text being presented. This is supposed to be used for quickly producing training videos (without paying actors), but someone used it for disinformation.
With the success of OpenAI’s ChatGPT, the other big AI companies are teasing us with their AIs. For example, the Verge tells us that Google’s new AI turns text into music. Alas, Google doesn’t seem to want to let us play with the AI, just admire the results.
D&D is a game for people who like rules: in order to play even the basic game, you had to make sense of roughly twenty pages of instructions, which cover everything from “Adjusting Ability Scores” (“Magic-users and clerics can reduce their strength scores by 3 points and add 1 to their prime requisite”) to “Who Gets the First Blow?” (“The character with the highest dexterity strikes first”). In fact, as I wandered farther into the cave, and acquired the rulebooks for Advanced Dungeons & Dragons, I found that there were rules for everything: … It would be a mistake to think of these rules as an impediment to enjoying the game. Rather, the rules are a necessary condition for enjoying the game, and this is true whether you play by them or not. The rules induct you into the world of D&D; they are the long, difficult scramble from the mouth of the cave to the first point where you can stand up and look around.
It seems to me that this is related to the various activities that were staged in Second Life and other environments. It also has connections to the Machinima phenomenon where people use 3D environments like games to stage acts that are filmed.
Of course, the problem with Fallout 76 is that the performers can get attacked during a performance.
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.
It makes sense to document use, but why would we document use of ChatGPT and not, for example, use of a library or of a research tool like Google Scholar? What about the use of ChatGPT demands that it be acknowledged?
Sinykin talks about this as an “act as groundbreaking as the research itself,” which seems a bit of an exaggeration. It is important that data is being reviewed and published, but this has been happening for a while in other fields. Nonetheless, it is a welcome initiative, especially if it gets attention like the LARB article. In 2013 the Tri-Council (of research agencies in Canada) called for a culture of research data stewardship. In 2015 I worked with Sonja Sapach and Catherine Middleton on a report on a Data Management Plan Recommendation for Social Science and Humanities Funding Agencies. That report looked more at the front end: requiring data management plans from those submitting grant proposals for data-driven projects, so that the data could later be made available for future research.
Sinykin’s essay looks at the poetry publishing culture in the US and how white it is. He shows how data can be used to study inequalities. We also need to ask about the privilege of English poetry and that of culture from the Global North. Not to mention research and research infrastructure.
Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.
Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.
The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.
Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legality and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? On a living person?
We routinely animate other people in novels, dialogue (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?
Then there are the possibilities for the manipulation of a digital twin or through such a twin.
As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”
My drawings are a reflection of my soul. What happens when artificial intelligence — and anyone with access to it — can replicate them?
Webcomic artist Sarah Andersen has written a timely Opinion for the New York Times on how The Alt-Right Manipulated My Comic. Then A.I. Claimed It. She talks about being harassed by the alt-right, who created a shadow version of her work full of violent, racist, and Nazi motifs. Now she could be haunted by an AI-generated shadow like the image above. Her essay nicely captures the feeling of helplessness that many artists who survive on their work must be feeling in the face of the “research” trick of LAION, the nonprofit arm of Stability AI that scraped copyrighted material under the cover of academic research and then made it available for commercialization as Stable Diffusion.
Andersen links to a useful article on AI Data Laundering which is a good term for what researchers seem to be doing intentionally or not. What is the solution? Datasets gathered with consent? Alas too many of us, including myself, have released images on Flickr and other sites. So, as the article author Andy Baio puts it, “Asking for permission slows technological progress, but it’s hard to take back something you’ve unconditionally released into the world.”
While artists like Andersen may have no legal recourse, that doesn’t make the practice ethical. Perhaps the academics doing the laundering should be called out. Perhaps we should consider boycotting such tools and hiring live artists when we have graphic design work.
Computer and video games featuring Nazi symbols such as the swastika can now be sold in Germany uncensored after a regulatory body lifted the longstanding ban.
CNN reported back in 2018 that Germany lifts ban on Nazi symbols in computer games. The game that prompted this was the counterfactual Wolfenstein II: The New Colossus, which imagines that the Nazis won WWII. To sell the game in Germany, the publishers had to change symbols like the swastika, as it is forbidden to display symbols of “unconstitutional organizations.” Germany has since changed its interpretation of the law so that games are now treated as works of art, like movies, in which it is legal to show the symbols.
This shows how difficult it can be to ban hate speech without censoring the arts. For that matter, how does one deal with ironic hate speech — hate speech masquerading as irony?