Moral Responsibility

On Thursday (the 29th of May) I gave a talk on Moral Responsibility and Artificial Intelligence at the Canadian AI 2025 conference in Calgary, Alberta.

I discussed what moral responsibility might be in the context of AI and argued for an ethic of care (and repair) approach to building relationships of responsibility and responsibility practices.

There was a Responsible AI track at the conference with some great talks by Gideon Christian (U of Calgary, Law) and Julita Vassileva (U Saskatchewan).

Rewiring the Humanities: Notes from the DH Winter School 2025

The Digital Humanities Winter School, a unique initiative by the Department of Humanities and Social Sciences, Indian Institute of Technology Delhi (IIT Delhi), was held in February 2025. Its primary goal was to bridge the gap for scholars and students from the humanities, social sciences, and other non-STEM disciplines in India, providing them with a hands-on introduction to computational tools and digital scholarship methods. This introduction aimed to foster algorithmic thinking, conceptualize data-centric research projects, encourage collaborative ventures, and instill critical approaches toward algorithms. The DH Winter School, with its promise of a low learning curve, was designed to boost the confidence of participants who came with little or no exposure to digital applications or programming. By addressing the limited opportunities for students of the humanities and social sciences in India to learn these methods, the DH Winter School aimed to impact the academic landscape significantly.

Michael Sinatra drew my attention to this post about the DH Winter School at IIT Delhi that I contributed to. See Rewiring the Humanities: Notes from the DH Winter School 2025.

News Media Publishers Run Coordinated Ad Campaign Urging Washington to Protect Content From Big Tech and AI

Today, hundreds of news publishers launched the “Support Responsible AI” ad campaign, which calls on Washington to make Big Tech pay for the content it takes to run its AI products.

I came across one of these ads about AI Theft from the News Media Alliance and followed it to this site: News Media Publishers Run Coordinated Ad Campaign Urging Washington to Protect Content From Big Tech and AI. They have three asks:

  • Require Big Tech and AI companies to fairly compensate content creators.
  • Mandate transparency, sourcing, and attribution in AI-generated content.
  • Prevent monopolies from engaging in coercive and anti-competitive practices.

Gary Marcus has a Substack column on Sam Altman’s attitude problem that talks about Altman’s lack of a response when confronted with an example of what seems like IP theft. I think the positions are hardening as groups begin to use charged language like “theft” for what AI companies are doing.

A Superforecaster’s View on AGI – 3 Quarks Daily

Malcolm Murray has a nice discussion about whether we will have AGI by 2030 in 3 Quarks Daily, A Superforecaster’s View on AGI. He first spends some time defining Artificial General Intelligence. He talks about input and output definitions:

  • Input definitions would be based on what the AGI can do.
  • Output definitions would be based on the effect, often economic, of AI.

OpenAI has defined AGI as “a highly autonomous system that outperforms humans at most economically valuable work.” I think this would be an input definition. They and Microsoft are also reported to have agreed that AGI will have been achieved when a system can generate $100 billion in profits. This would be an output definition.

He settles on two forecasting questions:

  • Will there exist by Dec 31, 2030, an AI that is able to do every cognitive digital task equivalently or better than the best human, in an equivalent or shorter time, and for an equivalent or cheaper cost?
  • Will by Dec 31, 2030, the U.S. have seen a year-on-year full-year GDP growth rate of 19% or higher?

He believes the answer to the first is Yes, but that such an AI will be a clean system working in the lab, not one deployed in the real world. The answer to the second question, he believes, is No because of the friction of the real world: it will take longer for deployment to reach the point where it has that level of effect on GDP growth.
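For a sense of scale, here is a quick back-of-the-envelope calculation in Python (my own sketch; the roughly $27 trillion US GDP baseline and the 2.5% typical growth rate are approximations I am supplying, not figures from Murray’s piece):

    # Rough scale of the 19% GDP-growth threshold.
    # The ~$27T baseline and ~2.5% typical growth rate are approximations for illustration.
    baseline_gdp_trillions = 27.0
    typical_growth = 0.025
    threshold_growth = 0.19

    print(f"A typical year adds about ${baseline_gdp_trillions * typical_growth:.2f} trillion")
    print(f"A 19% year would add about ${baseline_gdp_trillions * threshold_growth:.2f} trillion")

A 19% year would mean roughly five trillion dollars of new output in a single year, more than seven times a typical year, which makes the friction argument easy to see.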


Life, Liberty, and Superintelligence

Are American institutions ready for the AI age?

3 Quarks Daily pointed me to an essay in Arena on Life, Liberty, and Superintelligence. The essay starts with the question that Dario Amodei tackled in Machines of Loving Grace, namely, what might be the benefits of artificial intelligence (AI). It then questions whether we could actually achieve the potential benefits without the political will and changes needed to nimbly pivot.

Benefits: Amodei outlined a set of domains where intelligence could make a real difference, including:

  • Biology and health,
  • Neuroscience and mind,
  • Economic development and poverty, and
  • Peace and governance.

Amodei concluded with some thoughts on Work and meaning, though the loss of work and meaning may not be a benefit.

It is important that we talk about the benefits as massive investments are made in infrastructure for AI. We should discuss what we think we are going to get other than some very rich people and yet more powerful companies. Discussion of benefits can also balance the extensive documentation of risks.

Institutions: The essay then focuses on whether we would actually see the benefits Amodei outlines even if we get powerful AI. The author, Dean Ball, points out that everyone (like JD Vance) believes the USA should lead in AI, but questions whether we have the political will and appropriate institutions,

Viewed in this light, the better purpose of “AI policy” is not to create guardrails for AI — though most people agree some guardrails will be needed. Instead, our task is to create the institutions we will need for a world transformed by AI—the mechanisms required to make the most of a novus ordo seclorum. America leads the world in AI development; she must also lead the world in the governance of AI, just as our constitution has lit the Earth for two-and-a-half centuries. To describe this undertaking in shrill and quarrelsome terms like “AI policy” or, worse yet, “AI regulation,” falls far short of the job that is before us.

There could be other countries (read: China) that lag when it comes to innovation but are better able to deploy and implement innovations. What sort of institutions and politics does one need to be able to flexibly and ethically redesign civil institutions?

US State Dept. to use AI to Revoke Visas of Foreign Students who appear “pro-Hamas”

Axios has a story about how the State Department is launching a programme to review social media of foreign students to see if they are “pro-Hamas.” If they appear to support Hamas then they may get their visas revoked.

A senior official is quoted as saying “it would be negligent for the department that takes national security seriously to ignore publicly available information about [visa] applicants in terms of AI tools. … AI is one of the resources available to the government that’s very different from where we were technologically decades ago.”

There are obvious free-speech issues, but I also wonder at the use of AI to police speech. What will be policed next? Pro-EDI speech?

Thanks to Gary Marcus’ Substack for this.

Responsible AI Lecture in Delhi

A couple of days ago I gave an Institute Lecture on What is Responsible About Responsible AI at the Indian Institute of Technology Delhi, India. In it I looked at how AI ethics governance is discussed in Canada under the rubric of Responsible AI and AI Safety. I talked about the emergence of AI Safety Institutes like CAISI (the Canadian AI Safety Institute). Just when it seemed that “safety” was the emergent international approach to ethics governance, Vice President JD Vance’s speech at the Paris Summit made it clear that the Trump administration is not interested,

The AI future is not going to be won by hand-wringing about safety. (Vance)

IIT Delhi DH 2025 Winter School

Arjun Ghosh invited me to contribute to the DH 2025 Winter School at IIT Delhi. I’m teaching a 6-day workshop on Voyant as part of this Winter School. You can see my outline here (note that I am still finishing the individual pages). Some thoughts:

  • There is a real interest in DH in India. Arjun had over 500 applications for 25 places. I doubt we would have that many in Canada.
  • As can be expected, there is a lot of interest in handling Indian languages like Hindi or Tamil (see the sketch after this list).
  • There are a number of social scientists at the School. The humanities and social sciences may not be as clearly distinguished here.
  • There was an interesting session on digital libraries given by a data librarian at UPenn.
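Since Voyant can be driven by URL, it is easy to get a class started on a web-hosted text. Here is a minimal sketch in Python (this assumes Voyant’s “input” URL parameter; the source text URL is a hypothetical placeholder):

    # Minimal sketch: open a web-hosted text in Voyant Tools via its "input" URL parameter.
    # The source text URL below is a hypothetical placeholder, not a real corpus.
    from urllib.parse import urlencode
    import webbrowser

    text_url = "https://example.org/texts/hindi-sample.txt"  # hypothetical Unicode (Hindi) text
    voyant_url = "https://voyant-tools.org/?" + urlencode({"input": text_url})

    print(voyant_url)
    webbrowser.open(voyant_url)  # opens the default Voyant skin on that text

Since Voyant handles Unicode, the same approach should work for Hindi or Tamil texts, though tokenization quality will vary by language.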

The Ends of Safety

The AI future is not going to be won by hand-wringing about safety. (Vance)

At the Paris Summit, Vice President JD Vance gave a speech indicating a shift in US policy towards AI including a move away from “safety” as a common ethical ground. He also discussed how:

  • The US intends to dominate in AI, including chip manufacturing, though other countries can join them.
  • They want to focus on opportunity not guardrails.
  • They need reliable energy to power AI, which I take to mean non-green energy.
  • AI should be free of “ideology” by which he probably means “woke” language.

He apparently didn’t stay to hear the response from the Europeans. Nor did the USA or the UK sign the summit Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet that included language saying,

Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.

This marks a shift from the consensus that emerged from the 2023 Bletchley Declaration, which led to the formation of the UK AI Safety Institute, the US AI Safety Institute, the Canadian AI Safety Institute, and others.

With Vance signalling the shift away from safety, the UK has renamed its AISI the AI Security Institute (note how they substituted “security” for “safety”). The renamed unit is going to focus more on cybersecurity. It looks like the UK, unlike Europe, is going to try to stay ahead of a Trump ideological turn.

The US AI Safety Institute (AISI), which was set up with a small budget by NIST, is likely to be affected as well (or at least have its acronym changed).