How Deepfake Videos Are Used to Spread Disinformation – The New York Times

For the first time, A.I.-generated personas, often used for corporate trainings, were detected in a state-aligned information campaign — opening a new chapter in online manipulation.

The New York Times has a story about How Deepfake Videos Are Used to Spread Disinformation. The videos are actually from a service, Synthesia, which lets you generate videos of talking heads from transcripts that you prepare. They have a range of professionally acted avatars, and their technology generates a video of your text being presented. This is meant for quickly producing training videos (without paying actors), but someone used it for disinformation.
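To see why such a service is so easy to repurpose, consider a hypothetical sketch of driving a talking-head video service programmatically. To be clear, the endpoint, payload fields, and avatar name below are illustrative assumptions, not Synthesia’s actual API.

```python
# Hypothetical sketch only: the endpoint, fields, and avatar name are
# illustrative assumptions, not Synthesia's actual API.
import requests

API_URL = "https://api.example-avatar-service.com/v1/videos"  # placeholder URL
API_KEY = "your-api-key"

payload = {
    "avatar": "presenter_01",  # one of the professionally acted avatars
    "script": "Welcome to today's training session on data privacy.",
    "voice": "en-US-neutral",
}

# The service renders the chosen avatar reading your script and returns the video.
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["video_url"])  # hypothetical response field
```

The point is that the whole pipeline is a transcript and one API call away, which is what makes it attractive for disinformation as well as for training videos.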

ChatGPT listed as author on research papers: many scientists disapprove

The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.

We are beginning to see interesting ethical issues crop up around the new LLMs (Large Language Models) like ChatGPT. For example, Nature has a news article, ChatGPT listed as author on research papers: many scientists disapprove.

It makes sense to document use, but why would we document the use of ChatGPT and not, for example, the use of a library or of a research tool like Google Scholar? What is it about the use of ChatGPT that demands it be acknowledged?

Fuck the Poetry Police: On the Index of Major Literary Prizes in the United States

The LARB has a nice essay by Dan Sinykin, Fuck the Poetry Police: On the Index of Major Literary Prizes in the United States, on how researchers have used data to track how unequally poetry prizes are distributed. The essay talks about the creation of the Post45 Data Collective, which provides peer review for post-1945 cultural datasets.

Sinykin describes this as an “act as groundbreaking as the research itself,” which seems a bit of an exaggeration. It is important that data is being reviewed and published, but this has been happening for a while in other fields. Nonetheless, it is a welcome initiative, especially if it gets attention like the LARB article. In 2013 the Tri-Council (of research agencies in Canada) called for a culture of research data stewardship. In 2015 I worked with Sonja Sapach and Catherine Middleton on a report on a Data Management Plan Recommendation for Social Science and Humanities Funding Agencies. That report looks more at the front end, requiring plans from applicants seeking funding for data-driven projects, precisely so that the data could later be made available for future research.

Sinykin’s essay looks at the poetry publishing culture in the US and how white it is. He shows how data can be used to study inequalities. We also need to ask about the privilege of English poetry and of culture from the Global North, not to mention of research and research infrastructure.

Why scientists are building AI avatars of the dead | WIRED Middle East

Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.

Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.

The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.

Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues around the legality and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? On a living person?

We routinely animate other people in novels, in dialogues of the dead, and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?

Then there are the possibilities for manipulating a digital twin, or manipulating others through such a twin.

As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”


The Alt-Right Manipulated My Comic. Then A.I. Claimed It. 

AI-generated comic in the style of Sarah Andersen

My drawings are a reflection of my soul. What happens when artificial intelligence — and anyone with access to it — can replicate them?

Webcomic artist Sarah Andersen has written a timely opinion piece for the New York Times, The Alt-Right Manipulated My Comic. Then A.I. Claimed It. She talks about being harassed by the alt-right, who created a shadow version of her work full of violent, racist, and Nazi motifs. Now she could be haunted by an AI-generated shadow like the image above. Her essay nicely captures the feeling of helplessness that many artists who survive on their work must be feeling in the face of the “research” trick of LAION, the nonprofit that, with support from Stability AI, scraped copyrighted material under the cover of academic research; the results were then made available for commercialization as Stable Diffusion.

Andersen links to a useful article on AI Data Laundering, which is a good term for what researchers seem to be doing, intentionally or not. What is the solution? Datasets gathered with consent? Alas, too many of us, myself included, have released images on Flickr and other sites. So, as the article’s author Andy Baio puts it, “Asking for permission slows technological progress, but it’s hard to take back something you’ve unconditionally released into the world.”

While artists like Andersen may have no legal recourse, that doesn’t make the practice ethical. Perhaps the academics who are doing the laundering should be called out. Perhaps we should consider boycotting such tools and hiring live artists when we have graphic design work.

Germany lifts ban on Nazi symbols in computer games | CNN

Computer and video games featuring Nazi symbols such as the swastika can now be sold in Germany uncensored after a regulatory body lifted the longstanding ban.

CNN reported back in 2018 that Germany lifts ban on Nazi symbols in computer games. The game that prompted this was the counterfactual Wolfenstein II: The New Colossus, which imagines that the Nazis won WWII. To sell the game in Germany, the publisher had to change symbols like the swastika, as it is forbidden to display symbols of “unconstitutional organizations.” Germany has since changed its interpretation of the law so that games are now treated as works of art, like movies, in which it is legal to show the symbols.

This shows how difficult it can be to ban hate speech while not censoring the arts. For that matter, how does one deal with ironic hate speech – hate speech masquerading as irony?

Unitron Mac 512: A Contraband Mac 512K from Brazil

From a paper on postcolonial computing I learned about the Unitron Mac 512: A Contraband Mac 512K from Brazil. For a while Brazil didn’t allow the importation of computers (so as to kickstart its own computer industry). Unitron decided to reverse engineer the Mac 512K, but Apple put pressure on Brazil and the project was shut down. At least 500 machines were built, and I guess some are still in circulation.

The article is Philip, K., et al. (2010). “Postcolonial Computing: A Tactical Survey.” Science, Technology, & Human Values 37(1).

Though Apple had no intellectual property protection for the Macintosh in Brazil, the American corporation was able to pressure government and other economic actors within Brazil to reframe Unitron’s activities, once seen as nationalist and anti-colonial, as immoral piracy.

Character.AI: Dialogue on AI Ethics

Part of an image generated from the text “cartoon pencil drawing of ethics professor and student talking” by Midjourney, Oct. 5, 2022.

Last week I created a character on Character.AI, a new artificial intelligence tool created by some ex-Google engineers who worked on LaMDA, the language model from Google that I blogged about before.

Character.AI, which is currently down for maintenance because of the flood of users, lets you quickly create a character and then enter into dialogue with it. It actually works quite well. I created “The Ethics Professor” and then wrote a script of questions that I used to engage the AI character. The dialogue is below.
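Character.AI itself has no public API, so here is a hypothetical sketch of the same persona-plus-scripted-questions workflow, with the OpenAI chat API standing in for whatever model backs the character; the persona text and questions are illustrative, not my actual script.

```python
# Hypothetical recreation of the workflow: define a persona, then run a
# prepared script of questions against it. Uses the OpenAI chat API as a
# stand-in, since Character.AI has no public API.
# pip install openai; expects OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative persona; the real "Ethics Professor" lives on Character.AI.
PERSONA = (
    "You are The Ethics Professor, a thoughtful academic who answers "
    "questions about ethics patiently and asks questions in return."
)

# A prepared script of questions, as described above.
questions = [
    "What is ethics?",
    "Can an artificial intelligence be a moral agent?",
    "Who is accountable when an AI system causes harm?",
]

messages = [{"role": "system", "content": PERSONA}]
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {q}\nA: {answer}\n")
```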

From Bitcoin to Stablecoin: Crypto’s history is a house of cards

The wild beginnings, crazy turns, colorful characters and multiple comebacks of the crypto world

The Washington Post has a nice illustrated history of crypto, From Bitcoin to Stablecoin: Crypto’s history is a house of cards. They use a deck of cards as a visual metaphor and a graph of the ups and downs of crypto. I can’t help thinking that crypto is going to go up again, but when and in what form?

For that matter, where is Ruja Ignatova, the fugitive “Cryptoqueen” behind OneCoin?

Issues around AI text to art generators

A new art-generating AI system called Stable Diffusion can create convincing deepfakes, including of celebrities.

TechCrunch has a nice discussion of Deepfakes for all: Uncensored AI art model prompts ethics questions. The relatively sudden availability of AI text-to-art generators has provoked discussion of the ethics of creation and of large machine learning models; a minimal sketch after the list below shows just how accessible these tools have become.

It is worth identifying some of the potential issues:

  • These art-generating AIs may have violated copyright in scraping millions of images. Could artists whose work has been exploited sue for compensation?
  • The AIs are black boxes that are hard to query. You can’t tell if copyrighted images were used.
  • These AIs could change the economics of illustration. People who used to commission and pay for custom art for things like magazines, book covers, and posters could start using these AIs to save money. Just as Flickr changed the economics of photography, Midjourney could put commercial illustrators out of work.
  • We could see a lot more “original” art in situations where people previously could not afford it. Perhaps poster stores could offer to generate a custom image for you and print it. Get your portrait done as a cyberpunk astronaut.
  • The AIs could reinforce bias in our visual literacy. Systems that always depict philosophers as old white guys with beards could limit our imagination of what could be.
  • These could be used to create pornographic deepfakes with people’s faces on them or other toxic imagery.
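To make that accessibility concrete, here is a minimal sketch of generating an image locally, assuming the open-source Hugging Face diffusers library, the publicly released Stable Diffusion weights, and a CUDA GPU; the prompt is just an example.

```python
# A minimal sketch, assuming: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download the publicly released Stable Diffusion weights (several GB on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a consumer GPU with ~8 GB of VRAM is enough

# One line of text in, one "original" image out: that is the entire workflow.
prompt = "portrait of a philosopher as a cyberpunk astronaut, oil painting"
image = pipe(prompt).images[0]
image.save("portrait.png")
```

That a few lines of code can produce a plausible custom illustration is exactly why the economic and copyright questions above are so pressing.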