A new art-generating AI system called Stable Diffusion can create convincing deepfakes, including of celebrities.
TechCrunch has a nice discussion of Deepfakes for all: Uncensored AI art model prompts ethics questions. The relatively sudden availability of AI text-to-art generators has provoked discussion of the ethics of creation and of large machine learning models. Here are some interesting links:
- Ars Technica has an article on how artists have begun selling AI-generated work: Artists begin selling AI-generated artwork on stock photography websites. I note that MidJourney-generated images all seem to have a similar style. We may find that the style becomes more and more identifiable, like a lingering smell in the background.
- Ars Technica has another article on projects that let you check which original images may have been used to train AIs like MidJourney: Have AI image generators assimilated your art? New tool lets you check. The provenance of some of the training sets is documented here, which makes such checks possible. It remains to be seen what you can do if your images have been used. (A rough sketch of the general idea follows this list.)
- And of course there are art communities banning AI-generated art: Flooded with AI-generated images, some art communities ban them completely. This raises the question of whether one can even tell.
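To make the checking idea above concrete, here is a minimal, hypothetical sketch of the crudest possible version: scanning a locally downloaded chunk of LAION-style training metadata (published as parquet files of image URL and caption pairs) for URLs you control. The file name, column names, and example URL are placeholders, and the actual tool described in the article works differently (it uses an indexed CLIP similarity search rather than a literal URL match, so it can also catch re-hosted copies of an image).

```python
# Illustrative sketch only: look for your own image URLs in a LAION-style
# metadata file. The parquet file name and the URL/TEXT column names are
# assumptions about how such metadata is laid out.
import pandas as pd

my_urls = {"https://example.com/my-artwork.jpg"}  # images you have published

# Load just the columns we need from one metadata shard.
meta = pd.read_parquet(
    "laion-metadata-part-00000.parquet",
    columns=["URL", "TEXT"],
)

# Report any rows whose image URL matches one of yours.
matches = meta[meta["URL"].isin(my_urls)]
for url, caption in zip(matches["URL"], matches["TEXT"]):
    print("Possible training image:", url, "| caption:", caption)
```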
It is worth identifying some of the potential issues:
- These art-generating AIs may have violated copyright by scraping millions of images. Could artists whose work has been exploited sue for compensation?
- The AIs are black boxes that are hard to query. You can’t easily tell whether copyrighted images were used to train them.
- These AIs could change the economics of illustration. People who used to commission and pay for custom art for things like magazines, book covers, and posters could start using these AIs instead to save money. Just as Flickr changed the economics of photography, MidJourney could put commercial illustrators out of work.
- We could see a lot more “original” art in situations where people previously could not afford it. Perhaps poster stores could offer to generate a custom image for you and print it. Get your portrait done as a cyberpunk astronaut.
- The AIs could reinforce bias in our visual literacy. Systems that always depict philosophers as old white guys with beards could limit our imagination of what could be.
- These AIs could be used to create pornographic deepfakes with people’s faces on them, or other toxic imagery.