Canadian AI 2024 Conference

I’m at the Canadian AI 2024 Conference, where I will be on a panel about “The Future of Responsible AI and AI for Social Good in Canada” on Thursday. This panel is timely given that we seem to be seeing a sea change in AI regulation. Where initially there was a lot of talk about the dangers (to innovation) of regulation, we now have large players like China, the US, and the EU introducing regulations.

  • President Biden has issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It is unlikely that Biden can get legislation through Congress, so he is issuing executive orders instead.
  • The EU recently passed their AI Act which is risk-based.
  • The Canadian Artificial Intelligence and Data Act (AIDA) is coming and is similarly risk-based.

In light of AIDA I would imagine that the short-term future for Responsible AI in Canada might include the following:

  • Debate about AIDA and amendments to align it with other jurisdictions and to respond to industry concerns. Will there be a more inclusive consultation?
  • Attempts to better define what counts as a high-impact AI system, so as to better anticipate what will require onerous documentation and assessment.
  • Elaboration of how to best run an impact assessment.
  • Discussions around responsibility and how to assign it in different situations.

I hope there will also be a critical exploration of the assumptions and limits of responsible AI.

OpenAI Board Forms Safety and Security Committee

OpenAI has announced the formation of a Safety and Security Committee. See OpenAI Board Forms Safety and Security Committee. This comes after Ilya Sutskever and Jan Leike, the co-leads of the Superalignment project, left.

What is notable is that this Committee is an offshoot of the board and has four board members on it, including Sam Altman. It sounds to me as if the board will keep it on a short leash. I doubt it will have the independence to stand up to the board.

I also note that OpenAI’s ethics is no longer about “superalignment,” but is now about safety and security. With Sutskever and Leike gone, they are no longer trying to build an AI to align superintelligences. OpenAI claims that their mission is “to ensure that artificial general intelligence benefits all of humanity,” but how will they ensure such beneficence? They no longer have anything like an open ethics strategy. It’s now just safety and security.

I should add that OpenAI shared a safety update as part of the AI Seoul Summit.