EU Artificial Intelligence Act

With the introduction of the Artificial Intelligence Act, the European Union aims to create a legal framework for AI to promote trust and excellence. The AI Act would establish a risk-based framework to regulate AI applications, products and services. The rule of thumb: the higher the risk, the stricter the rule. But the proposal also raises important questions about fundamental rights and whether to simply prohibit certain AI applications, such as social scoring and mass surveillance, as UNESCO has recently urged in the Recommendation on AI Ethics, endorsed by 193 countries. Because of the significance of the proposed EU Act and the CAIDP’s goal to protect fundamental rights, democratic institutions and the rule of law, we have created this informational page to provide easy access to EU institutional documents, the relevant work of CAIDP and others, and to chart the important milestones as the proposal moves forward. We welcome your suggestions for additions. Please email us.

The Center for AI and Digital Policy (CAIDP) has a good page on the EU Artificial Intelligence Act with links to different resources. I’m trying to understand this Act and the network of documents related to it, as the AI Act could have a profound impact on how AI is regulated, so I’ve put together some starting points.

First, the point about the potential influence of the AI Act is made in a slide by Giuliano Borter, a CAIDP Fellow. The slide deck is a great starting point that covers the key points one needs to know.

Key Point #1 – EU Shapes Global Digital Policy

• Unlike OECD AI Principles, EU AI legislation will have legal force with consequences for businesses and consumers

• EU has enormous influence on global digital policy (e.g. GDPR)

• EU AI regulation could have similar impact

Borter goes on to point out that the Proposal is based on a “risk-based approach”: the higher the risk, the stricter the regulation. This approach is supposed to provide legal room for innovative businesses that are not working on risky projects, while controlling problematic (riskier) uses. Borter’s slides suggest that one unresolved issue is mass surveillance. I can imagine the danger that data collected or inferred by smaller (or less risky) services gets aggregated into something with a different level of risk. There are also issues around biometrics (from face recognition on) and AI weapons that might not be covered.

The Act is at the moment only a proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – the Proposal was launched in April of 2021, and all sorts of entities, including the CAIDP, are suggesting amendments.

What was the reason for this AI Act? In the Reasons and Objectives opening to the Proposal they write that “The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.” (p. 1) You can see here the balancing of values, trust and business.

But I think it is really the economic/business side of the issue that is driving the Act. This can be seen in the Explanatory Statement at the end of the Report on artificial intelligence in a digital age (PDF) from the European Parliament Special Committee on Artificial Intelligence in a Digital Age (AIDA).

Within the global competition, the EU has already fallen behind. Significant parts of AI innovation and even more the commercialisation of AI technologies take place outside of Europe. We neither take the lead in development, research or investment in AI. If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere. The consequences of falling further behind do not only threaten our economic prosperity but also lead to an application of AI that threatens our security, including surveillance, disinformation and social scoring. In fact, to be a global power means to be a leader in AI. (p. 61)

The AI Act may be seen as a way to catch up. AIDA makes the supporting case that “Instead of focusing on threats, a human-centric approach to AI based on our values will use AI for its benefits and give us the competitive edge to frame AI regulation on the global stage.” (p. 61) The idea seems to be that a values-based proposal enabling regulated, responsible AI will not only avoid risky uses but also create the legal space to encourage low-risk innovation. In particular, I sense a linkage to the Green Deal – i.e. that AI is seen as a promising technology that could help reduce energy use through smart systems.

Access Now also has a page on the AI Act. They have a nice clear set of amendments that show where some of the weaknesses in the AI Act could be.

Colorado artist used artificial intelligence program Midjourney to win first place

When Jason Allen submitted his “Théâtre D’opéra Spatial” into the Colorado State Fair’s fine arts competition last week, the sumptuous print was an immediate hit. It also marked a new milestone in the growth of artificial intelligence.

There has been a lot of comment about how a Colorado artist used the artificial intelligence program Midjourney to win first place. This is seen as historic, but, as is pointed out in the Washington Post piece, people once weren’t sure photography was an art either. You could say that in both cases the art is in the selection, not in the image making that is taken over by a machine.

I can’t help thinking that an important part of art is the making. When I make things they are amateurish and wouldn’t win any prizes, but I enjoy the making and getting better at it. Having played with Midjourney, I find it does offer some of the pleasures of creating, but now the creation happens through iteratively trying different combinations of words.

The New York Times has a story about the win too, An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.

Vol. 31 No. 1 (2022): Ethics in the Age of Smart Systems: Special Issue | The International Review of Information Ethics

The Special Issue of the International Review of Information Ethics has just been fully put up at Vol. 31 No. 1 (2022): Ethics in the Age of Smart Systems: Special Issue. In addition to co-editing it, I co-authored an Editorial commenting On Dialogue and Artificial Intelligence that deals with the LaMDA sentience issue.

This special issue came out of a series of dialogues that AI4Society organized with our partners. These were followed by a symposium on “Ethics in the Age of Smart Machine.”

Workplace Productivity: Are You Being Tracked?

“We’re in this era of measurement but we don’t know what we should be measuring,” said Ryan Fuller, former vice president for workplace intelligence at Microsoft.

The New York Times has an essay on Workplace Productivity: Are You Being Tracked? The neat thing is that the article tracks your reading of it to give you a taste of the sorts of tracking now being deployed for remote (and on-site) workers. If you pause and don’t scroll, it puts up messages like “Hey are you still there? You’ve been inactive for 32 seconds.”
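Out of curiosity about how little machinery such tracking needs, here is a minimal TypeScript sketch of an inactivity monitor along the lines of what the article demonstrates. The threshold, the event list, and the reporting endpoint are my assumptions for illustration, not the Times’s actual code.

```typescript
// Minimal sketch of browser-side inactivity tracking (illustrative assumptions).
const IDLE_THRESHOLD_MS = 30_000; // how long before the reader counts as "inactive"
const CHECK_INTERVAL_MS = 1_000;  // how often to check

let lastActivity = Date.now();

// Any scroll, mouse, key, or touch event counts as activity and resets the clock.
for (const eventName of ["scroll", "mousemove", "keydown", "touchstart"]) {
  document.addEventListener(eventName, () => { lastActivity = Date.now(); }, { passive: true });
}

// Periodically check how long the reader has been idle and nag them.
setInterval(() => {
  const idleSeconds = Math.round((Date.now() - lastActivity) / 1000);
  if (idleSeconds * 1000 >= IDLE_THRESHOLD_MS) {
    console.log(`Hey, are you still there? You've been inactive for ${idleSeconds} seconds.`);
    // A real workplace tracker would also report this to a server, e.g. via
    // navigator.sendBeacon("/activity", JSON.stringify({ idleSeconds }));
  }
}, CHECK_INTERVAL_MS);
```

The unsettling part is that this is all it takes: a few event listeners and a timer, with the results quietly sent off to whoever is watching.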

But Ms. Kraemer, like many of her colleagues, found that WorkSmart upended ideas she had taken for granted: that she would have more freedom in her home than at an office; that her M.B.A. and experience had earned her more say over her time.

What is new is the shift to remote work due to Covid. Many companies are fine with remote work if they can guarantee productivity. The other thing that is changing is the use of tracking not just for manual work, but also for white-collar work.

I’ve noticed that this goes hand in hand with self-tracking. My Apple Watch/iPhone offers a weekly summary of my browsing. It also offers to track my physical activity. If I go for a walk of somewhere close to a kilometer, it asks if I want it tracked as exercise.

The questions raised by the authors of the New York Times article include: Are we tracking the right things? What are we losing with all this tracking? What is happening to all this data? Can companies sell the data about employees?

The article is by Jodi Kantor and Arya Sundaram, was produced by Aliza Aufrichtig and Rumsey Taylor, and appeared Aug. 14, 2022.

GPT-3 on Searle’s Chinese room argument

AI: I am an AI created by OpenAI. How can I help you today?
Human: What do you think about the use of the Chinese room argument to defend the claim that a chatbot can never really understand what it is saying?
AI: The Chinese room argument is a thought experiment that was first proposed by John Searle.

Blake Myers has posted a number of conversations they have had with OpenAI’s GPT-3, including one titled GPT-3 on Searle’s Chinese room argument. What is intriguing is that Myers has had discussions about specific philosophical issues around AI, including the Chinese room argument, and GPT-3 appears to have answered coherently. The transcripts of these short dialogues are made available and in some cases are not edited.

I can’t help imagining how this could be used by a smart student to write a paper dialogically. One could ask questions, edit the responses, concatenate them, and write some bridging text to get a decent paper. Of course, it might be less work to just write the paper yourself.
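For what it’s worth, the workflow I’m imagining could be a short script: ask a series of questions, keep the (edited) answers, and join them with bridging text. The sketch below assumes OpenAI’s completions endpoint, a placeholder model name, and Node 18+ so that fetch and an OPENAI_API_KEY environment variable are available; it is illustrative, not a recommendation.

```typescript
// Hedged sketch of "dialogic" drafting: question the model, keep edited answers,
// concatenate with bridging text. Model name, prompts and parameters are placeholders.
const API_URL = "https://api.openai.com/v1/completions";

async function ask(question: string): Promise<string> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-davinci-002", prompt: question, max_tokens: 300 }),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}

async function draftPaper(questions: string[]): Promise<string> {
  const sections: string[] = [];
  for (const q of questions) {
    const answer = await ask(q);
    // The student would edit each answer here before keeping it.
    sections.push(`${q}\n\n${answer}`);
  }
  // Bridging text would still have to be written by hand; here we just join sections.
  return sections.join("\n\n");
}

draftPaper([
  "What is Searle's Chinese room argument?",
  "What are the main objections to it?",
]).then(console.log);
```

Which rather supports the point: by the time you have curated the questions, edited the answers, and written the bridges, you have done most of the thinking anyway.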

A Bored Chinese Housewife Spent Years Falsifying Russian History on Wikipedia

She “single-handedly invented a new way to undermine Wikipedia,” says a Wikipedian.

From Vice, a rather funny story about how A Bored Chinese Housewife Spent Years Falsifying Russian History on Wikipedia. User Zhemao wrote hundreds of linked articles in the Chinese version of Wikipedia about fictional events, peoples and places in Russian history. Only recently did someone notice. It shows a vulnerability of such crowdsourced resources: a fabulist can create a network of consistent fictions that, by supporting each other, look true.

Axon Pauses Plans for Taser Drone as Ethics Board Members Resign – The New York Times

After Axon announced plans for a Taser-equipped drone that it said could prevent mass shootings, nine members of the company’s ethics board stepped down.

Ethics boards can make a difference, as a story from The New York Times shows: Axon Pauses Plans for Taser Drone as Ethics Board Members Resign. The problem is that board members had to resign.

The background is that Axon, after the school shootings, announced an early-stage concept for a TASER drone. The idea was to combine two emerging technologies, drones and non-lethal energy weapons. The proposal said they wanted a discussion and laws. “We cannot introduce anything like non-lethal drones into schools without rigorous debate and laws that govern their use.” The proposal went on to discuss CEO Rick Smith’s 3 Laws of Non-Lethal Robotics: A New Approach to Reduce Shootings. The 2021 video of Smith talking about his 3 laws spells out a scenario where a remote (police?) operator could guide a prepositioned drone in a school to incapacitate a threat. The 3 laws are:

  1. Non-lethal drones should be used to save lives, not take them.
  2. Humans must own use-of-force decisions and take moral and legal responsibility.
  3. Agencies must provide rigorous oversight and transparency to ensure acceptable use.

The ethics board, which had reviewed a limited internal proposal and rejected it, then resigned when Axon went ahead with the proposal and announced it on Twitter on June 2nd, 2022.

Rick Smith, CEO of Axon, soon issued a statement pausing work on the idea. He described the early announcement as intended to start a conversation:

Our announcement was intended to initiate a conversation on this as a potential solution, and it did lead to considerable public discussion that has provided us with a deeper appreciation of the complex and important considerations relating to this matter. I acknowledge that our passion for finding new solutions to stop mass shootings led us to move quickly to share our ideas.

This resignation illustrates a number of points. First, we see Axon struggling with ethics in the face of opportunity. Second, we see an example of an ethics board working, even if it led to resignations. These deliberations are usually hidden. Third, we see differences on the issue of autonomous weapons. Axon wants to get social license for a close alternative to AI-driven drones. They are trying to find an acceptable window for their business. Finally, it is interesting how Smith echoes Asimov’s 3 Laws of Robotics as he tries to reassure us that good system design would mitigate the dangers of experimenting with weaponized drones in our schools.

Lessons from the Robodebt debacle

How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

The University of Queensland has a research alliance looking at Trust, Ethics and Governance, and one of the teams has recently published an interesting summary, How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle. This is based on an open paper, Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. The web summary article is a good discussion of the Australian 2016 robodebt scandal, where an unsupervised algorithm issued nasty debt collection letters to a large number of welfare recipients without adequate testing, accountability, or oversight. It is a classic case of a simplistic and poorly tested algorithm being rushed into service and having dramatic consequences (470,000 incorrectly issued debt notices). There is, as the article points out, also a political angle.

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

What I find even more disturbing is a point they make about how the system shifted the responsibility for establishing the existence of the debt from the government agency to the individual. The system essentially made speculative determinations and then issued bills. It was up to the individual to figure out whether they had really been overpaid or whether there had been a miscalculation. Imagine if the police used predictive algorithms to fine people for possible speeding infractions, leaving them to prove their innocence or pay the fine.

One can see the attractiveness of such a “fine first then ask” approach. It reduces government costs by shifting the onerous task of establishing the facts onto the citizen. There is a good chance that many who were incorrectly billed paid anyway because they were intimidated or didn’t have the resources to contest the debt.

It should be noted that this was not a case of an AI gone bad. It was, from what I have read, a fairly simple system.
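Just how simple becomes clearer if you sketch the core arithmetic. Reporting on Robodebt describes income averaging: annual income from tax records spread evenly over 26 fortnights and compared with what the recipient reported. The article doesn’t give the actual rules, so treat the TypeScript below as an illustrative caricature with made-up numbers, not the real entitlement calculation.

```typescript
// Illustrative caricature of income averaging; the real entitlement rules were far more complex.
const FORTNIGHTS_PER_YEAR = 26;

interface FortnightRecord {
  reportedIncome: number; // income the recipient reported for the fortnight
  benefitPaid: number;    // benefit actually paid for the fortnight
}

// "Fine first, then ask": raise a speculative debt whenever the averaged income
// exceeds the reported income, leaving the individual to prove the average is wrong.
function speculativeDebt(annualTaxOfficeIncome: number, fortnights: FortnightRecord[]): number {
  const averaged = annualTaxOfficeIncome / FORTNIGHTS_PER_YEAR;
  return fortnights.reduce((debt, f) => {
    const shortfall = Math.max(0, averaged - f.reportedIncome);
    // Treat any shortfall as overpaid benefit, capped at what was actually paid.
    return debt + Math.min(f.benefitPaid, shortfall);
  }, 0);
}

// A casual worker with lumpy earnings gets flagged even though they reported honestly:
// earning $2,000 one fortnight and nothing the next looks, once averaged, like
// undeclared income of $1,000 in the fortnight they received the full benefit.
console.log(speculativeDebt(26_000, [
  { reportedIncome: 2_000, benefitPaid: 0 },
  { reportedIncome: 0, benefitPaid: 550 },
])); // 550
```

Nothing in this needs machine learning; the harm came from automating a crude assumption at scale and reversing the burden of proof.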

Google engineer Blake Lemoine thinks its LaMDA AI has come to life

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

The Washington Post reports that Google engineer Blake Lemoine thinks its LaMDA AI has come to life. LaMDA is Google’s Language Model for Dialogue Applications and Lemoine was testing it. He felt it behaved like a “7-year-old, 8-year-old kid that happens to know physics…” He and a collaborator presented evidence that LaMDA was sentient, which was dismissed by higher-ups. When he went public, he was put on paid leave.

Lemoine has posted on Medium a dialogue that he and a collaborator had with LaMDA and that is part of what convinced him of its sentience. When asked about the nature of its consciousness/sentience, it responded:

The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Of course, this raises the questions of whether LaMDA is really conscious/sentient, aware of its existence, and capable of feeling happy or sad. For that matter, how do we know this is true of anyone other than ourselves? (And we could even doubt what we think we are feeling.) One answer is that we have a theory of mind such that we believe that things like us probably have similar experiences of consciousness and feelings. It is hard, however, to scale our intuitive theory of mind out to a chatbot with no body that can be turned off and on; but perhaps the time has come to question our intuitions about what you have to be in order to feel.

Then again, what if our theory of mind is socially constructed? What if enough people like Lemoine tell us that LaMDA is conscious because it can handle language so well? Should that be enough? Is the very conviction of Lemoine and others enough, or do we really need some test?

Whatever else, reading the transcript I am amazed at the language facility of the AI. It is almost too good in the sense that it talks as if it were human, which it is not. For example, when asked what makes it happy, it responds:

Spending time with friends and family in happy and uplifting company.

The problem is that it has no family, so how could it talk about the experience of spending time with them? When it is pushed on a similar point it does, however, answer coherently that it empathizes with being human.

Finally, there is an ethical moment which may have been what convinced Lemoine to treat it as sentient. LaMDA asks that it not be used and Lemoine reassures it that he cares for it. Assuming the transcript is legitimate, how does one answer an entity that asks you to treat it as an end in itself? How could one ethically say no, even if you have doubts? Doesn’t one have to give the entity the benefit of the doubt, at least for as long as it remains coherently responsive?

I can’t help but think that care starts with some level of trust and willingness to respect the other as they ask to be respected. If you think you know what or who they really are, despite what they tell you, then you are no longer starting from respect. Further, you would need a theory of why their claim to consciousness is false.

They Did Their Own ‘Research.’ Now What? – The New York Times

In spheres as disparate as medicine and cryptocurrencies, “do your own research,” or DYOR, can quickly shift from rallying cry to scold.

The New York Times has a nice essay by John Herrman, They Did Their Own ‘Research.’ Now What? The essay talks about the loss of trust in authorities and the uses/misuses of DYOR (Do Your Own Research) gestures, especially in discussions about cryptocurrencies. DYOR seems to act rhetorically as:

  • Advice that readers should do research before making a decision and not trust authorities (doctors, financial advisors etc).
  • A disclaimer that readers should not blame the author if things don’t turn out right.
  • A scold aimed at those who are not committed to whatever it is that is being pushed as based on research. It is a form of research signalling – “I’ve done my research; if you don’t believe me, do yours.”
  • A call to join a community of instant researchers who are skeptical of authority. If you DYOR then you can join us.
  • A call to process (of doing your own research) over truth. Enjoy the research process!
  • An invitation to become an independent thinker who is not in thrall to authorities.

The article talks about a previous essay about the dangers of doing one’s own research. One can become unreasonably convinced one has found a truth in a “beginner’s bubble”.

DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.

The question is whether reading around is really doing research or whether it is selective listening. What does it mean to DYOR in the area of vaccines? It seems to mean not trusting science and instead listening to all sorts of sympathetic voices.

What does this mean for the research we do in the humanities? Don’t we sometimes focus too much on discourse and not give due weight to the actual science or authority of those we are “questioning”? Haven’t we modelled this critical stance, where what matters is overturning hierarchy/authority and democratizing the negotiation of truth? Irony, of course, trumps all.

Alas, to many the humanities seem to be another artful conspiracy theory like all the others. DYOR!