ASBA Releases Artificial Intelligence Policy Guidance for K-12 Education – Alberta School Boards Association

Alberta School Boards Association (ASBA) is pleased to announce the release of its Artificial Intelligence Policy Guidance. As Artificial Intelligence (AI) continues to shape the future of education, ASBA has […]

The Alberta School Boards Association has released its Artificial Intelligence Policy Guidance for K-12 Education. This 14-page policy document is clear and useful without being prescriptive, and it could serve as a model for other educational organizations. (Note that it was authored by someone I supervised.)

AI for Information Accessibility: From the Grassroots to Policy Action

It’s vital to “keep humans in the loop” to avoid humanizing machine-learning models in research

Today I was part of a panel organized by the Carnegie Council and the UNESCO Information for All Programme Working Group on AI for Information Accessibility: From the Grassroots to Policy Action. We discussed three issues, starting with environmental sustainability and artificial intelligence, then moving to principles for AI, and finally to policies and regulation. I am in awe of the other speakers, who were excellent and introduced new ways of thinking about the issues.

Dariia Opryshko, for example, talked about how Too Much Trust in AI Poses Unexpected Threats to the Scientific Process. We run the risk of limiting what we think is knowable to what can be researched by AI. We also run the risk that we trust only research conducted by AI. Alternatively, the misuse of AI could lead to science ceasing to be trusted altogether. The Scientific American article linked above is based on research published in Nature: Artificial intelligence and illusions of understanding in scientific research.

I talked about the implications of the sort of regulation we see in the Artificial Intelligence and Data Act (AIDA) in Bill C-27. AIDA takes a risk-management approach to regulating AI, defining a class of dangerous AI systems, called "high-impact" systems, that will be treated differently. This allows the regulation to be "agile" in the sense that it can be adapted to emerging types of AI. Right now we might be worried about LLMs and misinformation at scale, but five years from now it may be AIs that manage nuclear reactors. The issue with agility is that it depends on there being government officers who stay on top of the technology; otherwise the government will end up relying for advice on the very companies it is supposed to regulate. We thus need continuous training and experimentation in government for it to be able to regulate in an agile way.
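To make the risk-management idea concrete, here is a minimal sketch in Python of how a tiered scheme separates classification from obligations. The categories, registry entries, and obligations below are entirely hypothetical illustrations, not anything AIDA actually specifies; the point is only the shape of the design, where regulators can add new system types to the registry without rewriting the obligations attached to each tier.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    HIGH_IMPACT = "high-impact"  # echoes AIDA's "high-impact" label; the tiers here are invented

# Hypothetical registry mapping kinds of AI systems to tiers.
# The design point: regulators update this table as new technologies
# emerge, without changing the obligations defined below.
TIER_REGISTRY = {
    "spam filter": RiskTier.MINIMAL,
    "content recommender at scale": RiskTier.HIGH_IMPACT,
    "reactor management system": RiskTier.HIGH_IMPACT,
}

# Hypothetical obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["self-declaration"],
    RiskTier.HIGH_IMPACT: ["risk assessment", "independent audit", "incident reporting"],
}

def obligations_for(system_kind: str) -> list[str]:
    """Look up a system's tier and return the obligations it triggers."""
    tier = TIER_REGISTRY.get(system_kind, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for("content recommender at scale"))
    # ['risk assessment', 'independent audit', 'incident reporting']
```

The registry can grow while the obligations stay put, which is what "agile" regulation amounts to in practice, and also why it depends on officers knowledgeable enough to keep the registry current.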

UN launches recommendations for urgent action to curb harm from spread of mis- and disinformation and hate speech: Global Principles for Information Integrity address risks posed by advances in AI

United Nations, New York, 24 June 2024 – The world must respond to the harm caused by the spread of online hate and lies while robustly upholding human rights, United Nations Secretary-General António Guterres said today at the launch of the United Nations Global Principles for Information Integrity.

The UN has issued a press release announcing recommendations for urgent action to curb harm from the spread of mis- and disinformation and hate speech. The press release marks the launch of the United Nations Global Principles for Information Integrity, which address risks posed by advances in AI.

The recommendations in the press release include:

Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.

Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy, and safety, allowing users greater choice and control over their online experience and personal data.

Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.

Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits and co-develop industry accountability frameworks.