ChatGPT: Chatbots can help us rediscover the rich history of dialogue

The rise of AI chatbots provides an opportunity to expand the ways we do philosophy and research, and how we engage in intellectual discourse.

I published an article in The Conversation today, ChatGPT: Chatbots can help us rediscover the rich history of dialogue. This touches on a topic I’ve been thinking about a lot: how chatbots are dialogue machines, and how we can draw on the long history of dialogue in philosophy to understand the limits and potential of chatbots like ChatGPT.


ChatGPT listed as author on research papers: many scientists disapprove

The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.

We are beginning to see interesting ethical issues crop up regarding the new LLMs (Large Language Models) like ChatGPT. For example, Nature has a news article, ChatGPT listed as author on research papers: many scientists disapprove.

It makes sense to document use, but why would we document use of ChatGPT and not, for example, use of a library or of a research tool like Google Scholar? What about the use of ChatGPT demands that it be acknowledged?

Why scientists are building AI avatars of the dead | WIRED Middle East

Advances in AI and humanoid robotics have brought us to the threshold of a new kind of capability: creating lifelike digital renditions of the deceased.

Wired Magazine has a nice article about Why scientists are building AI avatars of the dead. The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion. You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.

The article also talks about the importance of the body and ideas about modelling personas with bodies. Imagine wearing motion trackers and other sensors so that your bodily presence could be modelled. Then imagine your digital twin being instantiated in a robot.

Needless to say, we aren’t anywhere close yet. See this spoof video of the robot Sophia on a date with Will Smith. There are nonetheless issues about the legalities and ethics of creating bots based on people. What if one didn’t have permission from the original? Is it ethical to create a bot modelled on a historical person? On a living person?

We routinely animate other people in novels, dialogue (of the dead), and in conversation. Is impersonating someone so wrong? Should people be able to control their name and likeness under all circumstances?

Then there are the possibilities for the manipulation of a digital twin, or for manipulation through such a twin.

As for the issue of data breaches, digital resurrection opens up a whole new can of worms. “You may share all of your feelings, your intimate details,” Hickok says. “So there’s the prospect of malicious intent—if I had access to your bot and was able to talk to you through it, I could change your attitude about things or nudge you toward certain actions, say things your loved one never would have said.”


The Alt-Right Manipulated My Comic. Then A.I. Claimed It. 

AI-generated comic in the style of Sarah Andersen

My drawings are a reflection of my soul. What happens when artificial intelligence — and anyone with access to it — can replicate them?

Webcomic artist Sarah Andersen has written a timely Opinion piece for the New York Times on how The Alt-Right Manipulated My Comic. Then A.I. Claimed It. She talks about being harassed by the alt-right, who created a shadow version of her work full of violent, racist and Nazi motifs. Now she could be haunted by an AI-generated shadow like the image above. Her essay nicely captures the feeling of helplessness that many artists who survive on their work must be feeling before the “research” trick of LAION, the nonprofit that scraped copyrighted material under the cover of academic research, which Stability AI then commercialized as Stable Diffusion.

Andersen links to a useful article on AI Data Laundering, which is a good term for what researchers seem to be doing, intentionally or not. What is the solution? Datasets gathered with consent? Alas, too many of us, myself included, have released images on Flickr and other sites. So, as the article’s author Andy Baio puts it, “Asking for permission slows technological progress, but it’s hard to take back something you’ve unconditionally released into the world.”

While artists like Andersen may have no legal recourse, that doesn’t make the practice ethical. Perhaps the academics doing the laundering should be called out. Perhaps we should consider boycotting such tools and hiring live artists when we have graphic design work.

How AI image generators work, like DALL-E, Lensa and stable diffusion

Use our simulator to learn how AI generates images from “noise.”

The Washington Post has a nice explainer on how text-to-image generators work: How AI image generators work, like DALL-E, Lensa and stable diffusion. They let you play with the generator, though you have to stick with the predefined phrases. What I hadn’t realized was the role of static noise in the diffusion model: the system is trained to remove noise that has been added to images, and it then generates new images by starting from pure noise and progressively denoising toward something matching the prompt.
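The basic idea can be sketched in a few lines of toy Python. This is only an illustration of my understanding, not the Post’s simulator or a real diffusion model: the `toy_denoiser` here cheats by knowing the target image, whereas a real model has learned to predict the noise from millions of training examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, num_steps=10):
    """Forward process: blend the image toward Gaussian noise as step t grows."""
    alpha = 1.0 - t / num_steps  # fraction of the original signal kept
    noise = rng.normal(size=image.shape)
    return alpha * image + (1.0 - alpha) * noise

def toy_denoiser(noisy, target):
    """Stand-in for the trained network. A real model predicts the noise to
    subtract; here we cheat by nudging the image toward a known target."""
    return noisy + 0.5 * (target - noisy)

target = np.ones((4, 4))            # pretend this is the image the prompt describes
corrupted = add_noise(target, t=8)  # training pairs look like (corrupted, target)

# Generation: start from pure static and repeatedly apply the denoiser.
image = rng.normal(size=(4, 4))
for _ in range(20):
    image = toy_denoiser(image, target)

print(np.abs(image - target).max())  # the static converges toward the target
```

The point of the sketch is the two halves: training sees images with ever more noise added, while generation runs the process in reverse, starting from static and removing noise step by step.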

Character.AI: Dialogue on AI Ethics

Part of image generated from text, “cartoon pencil drawing of ethics professor and student talking” by Midjourney, Oct. 5, 2022.

Last week I created a character on Character.AI, a new artificial intelligence tool created by some ex-Google engineers who worked on LaMDA, the language model from Google that I blogged about before.

Character.AI, which is now down for maintenance due to the volume of users, lets you quickly create a character and then enter into dialogue with it. It actually works quite well. I created “The Ethics Professor” and then wrote a script of questions that I used to engage the AI character. The dialogue is below.

Issues around AI text to art generators

A new art-generating AI system called Stable Diffusion can create convincing deepfakes, including of celebrities.

TechCrunch has a nice discussion of Deepfakes for all: Uncensored AI art model prompts ethics questions. The relatively sudden availability of AI text to art generators has provoked discussion on the ethics of creation and of large machine learning models. Here are some interesting links:

It is worth identifying some of the potential issues:

  • These art generating AIs may have violated copyright in scraping millions of images. Could artists whose work has been exploited sue for compensation?
  • The AIs are black boxes that are hard to query. You can’t tell if copyrighted images were used.
  • These AIs could change the economics of illustration. People who used to commission and pay for custom art for things like magazines, book covers, and posters could start using these AIs instead to save money. Just as Flickr changed the economics of photography, Midjourney could put commercial illustrators out of work.
  • We could see a lot more “original” art in situations where before people could not afford it. Perhaps poster stores could offer to generate a custom image for you and print it. Get your portrait done as a cyberpunk astronaut.
  • The AIs could reinforce biases in our visual culture. Systems that always render philosophers as old white guys with beards could limit our imagination of what could be.
  • These could be used to create pornographic deepfakes with people’s faces on them or other toxic imagery.

EU Artificial Intelligence Act

With the introduction of the Artificial Intelligence Act, the European Union aims to create a legal framework for AI to promote trust and excellence. The AI Act would establish a risk-based framework to regulate AI applications, products and services. The rule of thumb: the higher the risk, the stricter the rule. But the proposal also raises important questions about fundamental rights and whether to simply prohibit certain AI applications, such as social scoring and mass surveillance, as UNESCO has recently urged in the Recommendation on AI Ethics, endorsed by 193 countries. Because of the significance of the proposed EU Act and the CAIDP’s goal to protect fundamental rights, democratic institutions and the rule of law, we have created this informational page to provide easy access to EU institutional documents, the relevant work of CAIDP and others, and to chart the important milestones as the proposal moves forward. We welcome your suggestions for additions. Please email us.

The Center for AI and Digital Policy (CAIDP) has a good page on the EU Artificial Intelligence Act with links to different resources. I’m trying to understand this Act and the network of documents related to it, as the AI Act could have a profound impact on how AI is regulated, so I’ve put together some starting points.

First, the point about the potential influence of the AI Act is made in a slide by Giuliano Borter, a CAIDP Fellow. The slide deck is a great starting point that covers key points to know.

Key Point #1 – EU Shapes Global Digital Policy

• Unlike OECD AI Principles, EU AI legislation will have legal force with consequences for businesses and consumers

• EU has enormous influence on global digital policy (e.g. GDPR)

• EU AI regulation could have similar impact

Borter goes on to point out that the Proposal is based on a “risk-based approach” where the higher the risk, the stricter the regulation. This approach is supposed to provide legal room for innovative businesses not working on risky projects while controlling problematic (riskier) uses. Borter’s slides suggest that an unresolved issue is mass surveillance. I can imagine the danger that data collected or inferred by smaller (or less risky) services is aggregated into something with a different level of risk. There are also issues around biometrics (from facial recognition onwards) and AI weapons that might not be covered.

The Act is at the moment only a proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – the Proposal was launched in April of 2021 and all sorts of entities, including the CAIDP are suggesting amendments.

What was the reason for this AI Act? In the Reasons and Objectives opening of the Proposal they write that “The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.” (p. 1) You can see the balancing of values, trust and business.

But I think it is really the economic/business side of the issue that is driving the Act. This can be seen in the Explanatory Statement at the end of the Report on artificial intelligence in a digital age (PDF) from the European Parliament Special Committee on Artificial Intelligence in a Digital Age (AIDA).

Within the global competition, the EU has already fallen behind. Significant parts of AI innovation and even more the commercialisation of AI technologies take place outside of Europe. We neither take the lead in development, research or investment in AI. If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere. The consequences of falling further behind do not only threaten our economic prosperity but also lead to an application of AI that threatens our security, including surveillance, disinformation and social scoring. In fact, to be a global power means to be a leader in AI. (p. 61)

The AI Act may be seen as a way to catch up. AIDA makes the supporting case that “Instead of focusing on threats, a human-centric approach to AI based on our values will use AI for its benefits and give us the competitive edge to frame AI regulation on the global stage.” (p. 61) The idea seems to be that a values-based proposal enabling regulated, responsible AI will not only avoid risky uses, but create the legal space to encourage low-risk innovation. In particular, I sense a linkage to the Green Deal, i.e. that AI is seen as a promising technology that could help reduce energy use through smart systems.

Access Now also has a page on the AI Act. They have a nice clear set of amendments that show where some of the weaknesses in the AI Act could be.

Colorado artist used artificial intelligence program Midjourney to win first place

When Jason Allen submitted his “Théâtre D’opéra Spatial” into the Colorado State Fair’s fine arts competition last week, the sumptuous print was an immediate hit. It also marked a new milestone in the growth of artificial intelligence.

There has been a lot of comment about how a Colorado artist used the artificial intelligence program Midjourney to win first place. This is seen as historic, but, as is pointed out in the Washington Post piece, people once weren’t sure photography was an art either. You could say that in both cases the art is in the selection, not in the image making, which is taken over by a machine.

I can’t help thinking that an important part of art is the making. When I make art things they are amateurish and wouldn’t win any prizes, but I enjoy the making and improving at it. Having played with Midjourney, I found it does have some of the pleasures of creating, though now the creation is through iteratively trying different combinations of words.

The New York Times has a story about the win too, An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.

I Wrote a Book with GPT-3 AI in 24 Hours — And Got It Published

“How do we know that you haven’t written the poems you claim were authored by GPT-3?” the publisher asked in one of our calls.

That was what my life had become. Convincing a publisher that AI really had written the poems. No, I hadn’t done any editing. Yes, all the poems were generated in 24 hours and translated to Finnish with Google Translate — within those same 24 hours. Yes, I own the copyright, not OpenAI. Yes, they know and they said yes.

From Humanist I learned about the Medium post by Jukka Aalho, I Wrote a Book with GPT-3 AI in 24 Hours. Aalho wrote a short book of poetry in 24 hours using GPT-3. I’ve bought the book and I’m not impressed by the poems. Perhaps they were better in Finnish. Here is one of the better ones (where the bold was the prompt):

why do we need to be happy, really
wouldn’t it be enough if we just don’t suffer

but everyone seems to be so very consumed by it
as if it’s the most important thing they can do
is try to get a grip on this
and I guess if that’s what they want
then I suppose I don’t have to be the one telling them
that they are mistaken
that they are making a big mistake
but how could they know that if no one tells them?
I don’t know

I’m just rambling

I’m rambling again

I’m sorry

What’s more impressive is that he got it published, but that may be because people like me would want to check it out.

Aalho’s reflections on how such AIs might change creativity and editing are, however, quite interesting.