Blanket no-punching policies are useless in a world full of terrible people with even worse ideas. That’s even more true in a world where robots now do those people’s bidding. Robots, like people, are not all the same. While K5 can’t threaten bodily harm, the data it collects can cause real problems, and the social position it puts us all in—ever suspicious, ever watched, ever worried about what might be seen—is just as scary. At the very least, it’s a relationship we should question, not blithely accept as K5 rolls by.
The question, “Is it OK to kick a robot?” is a good one that nicely brings the ethics of human-robot co-existence down to earth and onto our sidewalks. What sort of respect does the proliferation of robots and scooters deserve? How should we treat these stupid things when they enter our everyday spaces?
Rich Sutton talked about how AI is a way of trying to understand what it is to be human. He defined intelligence as the ability to achieve goals in the world. Reinforcement learning is a form of machine learning aimed at autonomous AI; it is therefore more ambitious and harder, but it may also get us closer to intelligence. RL uses value functions to map states to values; agents then act to maximize reward, seeking out states that have value. It is a simple but powerful idea about intelligence.
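To make the value-function idea concrete, here is a minimal sketch of tabular TD(0) learning on a tiny random-walk task. The corridor environment, the parameter values, and the code itself are my own illustration, not Sutton's examples: a state's value is learned as an estimate of the reward eventually reachable from it.

```python
import random

N_STATES = 5          # non-terminal states 0..4; terminals just off each end
ALPHA = 0.1           # learning rate
GAMMA = 1.0           # no discounting in this short episodic task

def run_episode(V):
    """Random walk from the centre state, updating V along the way."""
    state = N_STATES // 2
    while 0 <= state < N_STATES:
        next_state = state + random.choice([-1, 1])
        # Reward of 1.0 only when stepping off the right end.
        reward = 1.0 if next_state == N_STATES else 0.0
        # The value of a terminal state is 0 by definition.
        next_value = V[next_state] if 0 <= next_state < N_STATES else 0.0
        # TD(0) update: nudge V[state] toward reward + gamma * V[next].
        V[state] += ALPHA * (reward + GAMMA * next_value - V[state])
        state = next_state

random.seed(0)
V = [0.0] * N_STATES
for _ in range(5000):
    run_episode(V)

# For this symmetric walk the true values are 1/6, 2/6, ..., 5/6:
# states nearer the rewarding end are worth more.
print([round(v, 2) for v in V])
```

The learned values rank states by how much reward they lead to, which is exactly the mapping from states to values described above.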
Jason Millar talked about autonomous vehicles and how right now mobility systems like Google Maps have only one criterion for planning a route for you, namely the time to get there. He asked what it would look like to have other criteria, like how difficult the driving would be, or the best view, or the fewest bumps. He wants the mobility systems being developed to be open to different values. These systems will become part of our mobility infrastructure.
After a day of talks, during which I gave a talk about the history of discussions about autonomy, we had a day and a half of workshops where groups formed and developed things. I was part of a team that developed a critique of the EU Guidelines for Trustworthy AI.
Exploring through Markup: Recovering COCOA. This paper looked at an experimental Voyant tool that allows one to use COCOA markup as a way of exploring a text in different ways. COCOA markup is a simple form of markup that was superseded by XML languages like those developed with the TEI. The paper recovered some of the history of markup and what we may have lost.
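To give a sense of what the tool works with, here is a rough sketch of how COCOA-style markup can drive exploration. In COCOA, a tag like `<S Bernardo>` sets the value of a single-letter category (here S, for speaker) for all following text until another tag changes it. The sample passage and this little parser are my own illustration, not the Voyant tool from the paper.

```python
import re

TAG = re.compile(r"<(\w)\s+([^>]*)>")

def parse_cocoa(text):
    """Return a list of (categories, line) pairs for non-tag lines."""
    current = {}          # current value for each category letter
    result = []
    for line in text.splitlines():
        m = TAG.match(line.strip())
        if m:
            current[m.group(1)] = m.group(2)   # tag line: update state
        elif line.strip():
            # Text line: record it with a snapshot of the current tags.
            result.append((dict(current), line.strip()))
    return result

sample = """<T Hamlet>
<S Bernardo>
Who's there?
<S Francisco>
Nay, answer me: stand, and unfold yourself.
"""

for cats, line in parse_cocoa(sample):
    print(cats.get("S"), "->", line)
```

Once lines carry their categories, one can slice a text by speaker, title, or any other category letter, which is the kind of exploration the experimental tool supports.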
Designing for Sustainability: Maintaining TAPoR and Methodi.ca. This paper was presented by Holly Pickering and discussed the processes we have set up to maintain TAPoR and Methodi.ca.
Our team also had two posters: one, “Generative Ethics: Using AI to Generate,” showed a toy that generates statements about artificial intelligence and ethics; the other, “Discovering Digital Methods: An Exploration of Methodica for Humanists,” showed what we are doing with Methodi.ca.
Needless to say, it raises ethical issues around community policing. Ring has a “Neighbors” app that lets vigilantes report suspicious behaviour, creating a form of digital neighbourhood watch. The article references a Motherboard article that suggests that such digital neighbourhood surveillance can lead to racism.
Beyond creating a “new neighborhood watch,” Amazon and Ring are normalizing the use of video surveillance and pitting neighbors against each other. Chris Gilliard, a professor of English at Macomb Community College who studies institutional tech policy, told Motherboard in a phone call that such “crime and safety” focused platforms can actively reinforce racism.
All we need now is for there to be AI in the mix: face recognition so you can identify anyone walking past your door.
The conference was opened by Reuben Quinn whose grandfather signed Treaty 6. He challenged us to think about what labels and labelling mean. Later Kim Tallbear challenged us to think about how we want the encounter with other intelligences to go. We don’t have a good track record of encountering the other and respecting intelligence. Now is the time to think about our positionality and to develop protocols for encounters. We should also be open to different forms of intelligence, not just ours.
The research concludes that although welfare reform may be leading to cost savings for the Department of Human Services (DHS), substantial costs are being shifted to vulnerable customers and the community services that support them. It is they that are paying the price of welfare reform.
The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.
The New York Times has an opinion piece by Sahil Chinoy on how they built a (legal) facial recognition machine for $60. They describe an inexpensive experiment in which they took footage of people walking past cameras installed in Bryant Park and compared them to known people who work in the area (scraped from the web sites of organizations with offices in the neighborhood). Everything they did used public resources that others could use. The cameras stream their footage publicly. Anyone can scrape the images. The image database they gathered came from public web sites. The software is available as a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
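The matching step at the heart of such a system is worth seeing stripped down. A real pipeline would use a face-embedding model or a commercial service (the article points at Amazon’s Rekognition); in this sketch the “embeddings” are made-up vectors, and the names, numbers, and threshold are my own inventions purely to show the nearest-neighbour logic.

```python
import math

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.6):
    """Match a probe embedding against a gallery of known faces.

    gallery maps name -> embedding, built (as the Times did) from
    photos scraped off nearby organizations' public web sites.
    Returns the best-matching name, or None if nothing is close enough.
    """
    best_name, best_dist = None, float("inf")
    for name, emb in gallery.items():
        d = distance(probe, emb)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Hypothetical gallery of known workers and two camera captures.
gallery = {
    "alice": [0.1, 0.9, 0.3],
    "bob": [0.8, 0.2, 0.5],
}
capture = [0.12, 0.88, 0.31]   # very close to "alice"
stranger = [0.9, 0.9, 0.9]     # close to no one in the gallery

print(identify(capture, gallery))   # matches "alice"
print(identify(stranger, gallery))  # no match: None
```

The unnerving part is how little is needed: a public camera feed, a scraped gallery, and a distance threshold.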
I’m intrigued by this experiment by the New York Times. It is a form of design thinking: they have designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could say it is a form of journalistic experimentation.
Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for recognition in all sorts of situations? Do we need to start guarding our faces?
Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.
This is one of a number of excellent articles by the New York Times that is part of their Privacy Project.
When it comes to the crucial ethical question of calling one’s mother, most people agreed that not doing so was a moral failing.
Quartz reports on a study in Philosophical Psychology: Ethicists are no more ethical than the rest of us, study finds. While one wonders how one can survey how ethical someone is, this is nonetheless a believable result. The contemporary university is structured deliberately not to be a place to change people’s morals, but to educate them. When we teach ethics we don’t assess or grade the morality of the student. Likewise, when we hire, promote, and assess a philosophy professor we don’t assess their personal morality either. We assess their research, teaching, and service record, all of which can be burnished without actually being ethical. There is, if you will, a professional ethic that research and teaching should not be personal, but detached.
A focus on the teaching and learning of ethics over personal morality is, despite the appearance of hypocrisy, a good thing. We try to create in the university, in the class, and in publications an openness to ideas, whoever they come from. By avoiding discussing personal morality we try to create a space where people of different views can enter into dialogue about ethics. Imagine what it would be like if it were otherwise. Imagine if my ethics class were about converting students to some standard of behaviour. Who would decide what that standard was? The ethos of professional ethics is one that emphasizes dialogue over action, history over behaviour, and ethical argumentation over disposition. Would it be ethical any other way?
This paper uses frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.
I get the feeling that various outfits (of experts) are trying to define what ethics in AI/ML is rather than engaging in a dialogue. There is a rush to be the expert on ethics. Perhaps we should imagine a different way of developing an ethical consensus.
For that matter, is there room for critical positions? What would it mean to call for a stop to all research into AI/ML as unethical until proven otherwise? Is that even thinkable? Can we imagine another way the discourse of ethics might play out?