Life, Liberty, and Superintelligence

Are American institutions ready for the AI age?

3QuarksDaily pointed me to an essay in Arena on Life, Liberty, and Superintelligence. The essay starts with the question that Dario Amodei tackled in Machines of Loving Grace, namely, what might be the benefits of artificial intelligence (AI)? It then asks whether we could actually achieve those benefits without the political will and institutional changes needed to nimbly pivot.

Benefits: Amodei outlined a set of domains where intelligence could make a real difference, including:

  • Biology and health,
  • Neuroscience and mind,
  • Economic development and poverty, and
  • Peace and governance.

Amodei concluded with some thoughts on "Work and meaning," though the loss of work and meaning may not be a benefit.

It is important that we talk about the benefits as massive investments are made in infrastructure for AI. We should discuss what we think we are going to get other than some very rich people and yet more powerful companies. Discussion of benefits can also balance the extensive documentation of risks.

Institutions: The essay then focuses on whether we could actually see the benefits Amodei outlines even if we get powerful AI. Ball points out that everyone (JD Vance included) believes the USA should lead in AI, but questions whether we have the political will and the appropriate institutions:

Viewed in this light, the better purpose of “AI policy” is not to create guardrails for AI — though most people agree some guardrails will be needed. Instead, our task is to create the institutions we will need for a world transformed by AI—the mechanisms required to make the most of a novus ordo seclorum. America leads the world in AI development; she must also lead the world in the governance of AI, just as our constitution has lit the Earth for two-and-a-half centuries. To describe this undertaking in shrill and quarrelsome terms like “AI policy” or, worse yet, “AI regulation,” falls far short of the job that is before us.

There could be other countries (read: China) that lag when it comes to innovation but are better able to deploy and implement the innovations. What sort of institutions and politics does one need to be able to flexibly and ethically redesign civil institutions?

ASBA Releases Artificial Intelligence Policy Guidance for K-12 Education – Alberta School Boards Association

Alberta School Boards Association (ASBA) is pleased to announce the release of its Artificial Intelligence Policy Guidance. As Artificial Intelligence (AI) continues to shape the future of education, ASBA has […]

The Alberta School Boards Association (ASBA) has released its Artificial Intelligence Policy Guidance for K-12 education. This 14-page policy document is clear and useful without being prescriptive. It could be a model for other educational organizations. (Note that it was authored by someone I supervised.)

AI for Information Accessibility: From the Grassroots to Policy Action

It’s vital to “keep humans in the loop” to avoid humanizing machine-learning models in research

Today I was part of a panel organized by the Carnegie Council and the UNESCO Information for All Programme Working Group on AI for Information Accessibility: From the Grassroots to Policy Action. We discussed three issues, starting with environmental sustainability and artificial intelligence, then moving to principles for AI, and finally policies and regulation. I am in awe of the other speakers, who were excellent and introduced new ways of thinking about the issues.

Dariia Opryshko, for example, talked about the dangers described in Too Much Trust in AI Poses Unexpected Threats to the Scientific Process. We run the risk of limiting what we think is knowable to what can be researched by AI. We also run the risk that we trust only research conducted by AI. Alternatively, the misuse of AI could lead to science ceasing to be trusted. The Scientific American article linked above is based on research published in Nature, Artificial intelligence and illusions of understanding in scientific research.

I talked about the implications of the sort of regulation we see in AIDA (the Artificial Intelligence and Data Act) in Bill C-27. AIDA takes a risk-management approach to regulating AI, defining a class of potentially dangerous systems, "high-impact" systems, that will be treated differently. This allows the regulation to be "agile" in the sense that it can be adapted to emerging types of AIs. Right now we might be worried about LLMs and misinformation at scale, but five years from now it may be AIs that manage nuclear reactors. The issue with agility is that it depends on government officers staying on top of the technology; otherwise the government will end up relying for advice on the very companies it is supposed to regulate. We thus need continuous training and experimentation in government for it to be able to regulate in an agile way.
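To make the tiered, risk-management idea concrete, here is a minimal sketch. It is purely illustrative: AIDA does not prescribe any such schema, and the tier names, trigger keywords, and obligations below are invented for the example. The point is simply that the registry of tiers and triggers can be amended as new kinds of systems emerge, without rewriting the surrounding rules.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a risk-management (tiered) approach to AI
# regulation. The tiers, triggers, and obligations are invented for this
# sketch and are not taken from AIDA.

@dataclass
class Tier:
    name: str
    obligations: list[str] = field(default_factory=list)

# A registry that can be amended as new kinds of AI systems emerge;
# updating it is what makes the approach "agile".
TIERS = {
    "high-impact": Tier("high-impact", [
        "impact assessment", "human oversight", "incident reporting",
    ]),
    "general": Tier("general", ["transparency notice"]),
}

# Keywords that route a system into the high-impact tier. Today these might
# flag LLMs deployed at scale; later they could cover reactor controllers.
HIGH_IMPACT_TRIGGERS = ("medical", "nuclear", "credit", "employment")

def classify(system_description: str) -> Tier:
    """Toy classifier: assign a system to a tier from its description."""
    text = system_description.lower()
    if any(trigger in text for trigger in HIGH_IMPACT_TRIGGERS):
        return TIERS["high-impact"]
    return TIERS["general"]

if __name__ == "__main__":
    print(classify("LLM assisting with employment decisions").name)  # high-impact
    print(classify("chatbot recommending restaurants").name)         # general
```

Even in this toy form, the design shows where the agility problem bites: someone in government has to keep the registry and its triggers current, which is exactly the continuous training and experimentation argued for above.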

UN launches recommendations for urgent action to curb harm from spread of mis- and disinformation and hate speech: Global Principles for Information Integrity address risks posed by advances in AI

United Nations, New York, 24 June 2024 – The world must respond to the harm caused by the spread of online hate and lies while robustly upholding human rights, United Nations Secretary-General António Guterres said today at the launch of the United Nations Global Principles for Information Integrity.

The UN has issued a press release, UN launches recommendations for urgent action to curb harm from spread of mis- and disinformation and hate speech, marking the launch of the United Nations Global Principles for Information Integrity.

The recommendations in the press release include:

  • Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.

  • Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy, and safety, allowing users greater choice and control over their online experience and personal data.

  • Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.

  • Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits and co-develop industry accountability frameworks.