Last week I attended the CIFAR and Amii Summer Institute on AI and Society. This brought together a group of faculty and new scholars to workshop ideas about AI, Ethics and Society. You can see conference notes here on philosophi.ca. Some of the interventions that struck me included:
- Rich Sutton talked about how AI is a way of trying to understand what it is to be human. He defined intelligence as the ability to achieve goals in the world. Reinforcement learning, a form of machine learning aimed at autonomous AI, is therefore more ambitious and harder than other approaches, but it will also get us closer to intelligence. RL uses value functions to map states to values; agents then act to maximize reward by reaching states that have value. It is a simplistic but powerful idea about intelligence.
- Jason Millar talked about autonomous vehicles and how right now mobility systems like Google Maps have only one criterion for planning a route for you, namely time to get there. He asked what it would look like to have other criteria, like how difficult the driving would be, the best view, or the fewest bumps. He wants the mobility systems being developed to be open to different values. These systems will become part of our mobility infrastructure.
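Sutton's picture of RL — value functions mapping states to values, with agents acting to maximize reward — can be sketched concretely. The toy environment below is my own invention, not from the talk: a five-state corridor where only the rightmost state gives reward, learned with tabular Q-learning.

```python
import random

# Toy illustration (not from Sutton's talk): states 0..4 in a corridor;
# reaching state 4 yields reward 1, everything else yields 0.
N_STATES = 5
ACTIONS = [-1, 1]            # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# The value function: maps (state, action) pairs to estimated value.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the value function, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned values now prefer moving right, toward the rewarding state.
print(Q[(2, 1)] > Q[(2, -1)])   # → True
```

The agent never receives the goal explicitly; it only receives reward, and the value function it learns is what steers it — which is exactly the simplicity (and the ambition) of the idea.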
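Millar's point about routing criteria can also be made concrete. The sketch below is hypothetical — the routes, criteria, and weights are all invented — but it shows how a planner's answer changes once values other than travel time are given weight.

```python
# Invented example: three candidate routes scored on several criteria.
routes = {
    "highway":  {"minutes": 22, "scenery": 2, "difficulty": 1, "bumpiness": 1},
    "coastal":  {"minutes": 35, "scenery": 9, "difficulty": 3, "bumpiness": 4},
    "downtown": {"minutes": 28, "scenery": 5, "difficulty": 7, "bumpiness": 6},
}

def cost(route, weights):
    # Weighted sum over whichever criteria the user cares about;
    # a negative weight marks a criterion where more is better (e.g. scenery).
    return sum(w * route[crit] for crit, w in weights.items())

def plan(weights):
    # Pick the route with the lowest weighted cost.
    return min(routes, key=lambda name: cost(routes[name], weights))

print(plan({"minutes": 1.0}))                    # time-only planner → highway
print(plan({"minutes": 0.2, "scenery": -1.0}))   # values the view   → coastal
```

Today's systems effectively hard-code the first weight vector; Millar's ask is that the weights stay open to the traveller's own values.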
After a day of talks, during which I gave one on the history of discussions about autonomy, we had a day and a half of workshops where groups formed and developed projects. I was part of a team that developed a critique of the EU Guidelines for Trustworthy AI.