The New York Times has a nice short video on cybersecurity, which is an increasingly pressing issue. One of the things they mention is how it may have been the USA and Israel that opened the Pandora’s box of cyberweapons when they used Stuxnet to damage Iran’s nuclear programme. By using a sophisticated worm first, we both legitimized the use of cyberwar against countries one is not actually at war with and showed what could be done. This, at least, is the argument of Countdown to Zero Day, a good book on Stuxnet.
Now the problem is that the USA, while having good offensive capability, is also one of the most vulnerable countries because of the heavy use of information technology in all walks of life. How can we defend against the weapons we have let loose?
What is particularly worrisome is that cyberweapons are being designed to be hard to trace and subtly disruptive in ways that fall short of all-out war. We are seeing a new form of hot/cold war in which countries harass each other electronically without actually declaring war or seeking civilian input. After 2016, all democratic countries need to protect against electoral disruption, which puts democracies at a disadvantage compared to closed societies.
Geoff McMaster of the Folio (the U of A’s news site) wrote a nice article, “Making AI accountable easier said than done, says U of A expert.” The article quotes me on accountability and artificial intelligence. What we didn’t really talk about are forms of accountability for automata, including:
Explainability – Can someone get an explanation as to how and why an AI made a decision that affects them? If people can get an explanation they can understand, then they can presumably take remedial action and hold someone or some organization accountable. (A toy sketch of what such an explanation might look like follows this list.)
Transparency – Is an automated decision-making process fully transparent so that it can be tested, studied and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability.
Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?
In all these cases there is a presumption of process, both to determine explainability, transparency or responsibility and then to punish or correct problems. Otherwise people will have no real recourse.
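To make explainability a bit more concrete, here is a toy sketch of the kind of explanation one might hope for from a simple automated decision. The credit-scoring model, the feature names, the weights and the threshold are all hypothetical and invented for illustration; real decision systems are far more complex, which is exactly why the explanation question is hard.

```python
# A minimal sketch of an "explainable" automated decision, assuming a
# hypothetical linear credit-scoring model. All names, weights and the
# approval threshold are invented for illustration.

WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0  # hypothetical score needed for approval

def decide_and_explain(applicant):
    """Return a decision together with each feature's contribution to the score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "refused"
    return decision, score, contributions

decision, score, contributions = decide_and_explain(
    {"income": 3.0, "years_employed": 1.0, "missed_payments": 2.0}
)
print(f"Decision: {decision} (score {score:.2f}, threshold {THRESHOLD})")
for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name} contributed {value:+.2f} to the score")
```

Even in this trivial case the output is an explanation a developer can read; whether an affected person could understand or act on it is a separate question, which is part of why explainability and transparency are treated as different bars.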
One of the issues that interests me most now is the history of this discussion. We tend to treat the ethics of AI as a new issue, but people have been thinking about how automation would affect people for some time. There have been textbooks for teaching Computer Ethics, like that of Deborah G. Johnson, since the 1980s. As part of research we did on how computers were presented in the news, we found articles from the 1960s about how automation might put people out of work. They weren’t thinking of AI then, but the ethical and social effects that concerned people were similar. What few people discussed, however, was how automation affected different groups differently. Michele Landsberg wrote a prescient article, “Will Computer Replace the Working Girl?”, in 1964 for the women’s section of The Globe and Mail, arguing that it was women in the typing pools who were being put out of work. Likewise, I suspect that some groups will be more affected by AI than others and that we need to prepare for that.
A good book addressing how universities might prepare for the disruption of artificial intelligence is Robot-Proof: Higher Education in the Age of Artificial Intelligence by Joseph Aoun (MIT Press, 2017).
Instead of educating college students for jobs that are about to disappear under the rising tide of technology, twenty-first-century universities should liberate them from outdated career models and give them ownership of their own futures. They should equip them with the literacies and skills they need to thrive in this new economy defined by technology, as well as continue providing them with access to the learning they need to face the challenges of life in a diverse, global environment.
Anatomy of an AI System – The Amazon Echo as an anatomical map of human labor, data and planetary resources. By Kate Crawford and Vladan Joler (2018)
Kate Crawford and Vladan Joler have created a powerful infographic and web site, Anatomy of an AI System. The dark illustration and site are an essay that starts with the Amazon Echo and then sketches out the global anatomy of this apparently simple AI appliance. They do this by looking at where the materials come from, where the labour comes from (and goes), and the underlying infrastructure.
Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data.
The essay/visualization is a powerful example of how we can learn by critically examining the technologies around us.
Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.
For Facebook, Google, and Twitter the fight against fake news seems to be two-pronged: De-incentivize the targeted content and provide avenues to correct factual inaccuracies. These are both surface fixes, however, akin to putting caulk on the Grand Canyon.
And, despite grand hand waves, both approaches are reactive. They don’t aim at understanding how this problem became prevalent, or creating a method that attacks the systemic issue. Instead these advertising giants implement new mechanisms by which people can report one-off issues—and by which the platforms will be left playing cat-and-mouse games against fake news—all the while giving no real clear glimpse into their opaque ad platforms.
The problem is that these companies make too much money from ads, and elections are a chance to sell lots of them, manipulative or not. For that matter, what political ad doesn’t try to manipulate viewers?
The Slashdot story was actually about Mozilla’s Responsible Computer Science Challenge, which will support initiatives to embed ethics in computer science courses. Alas, the efficacy of ethics courses is questionable. Aristotle would say that if you don’t have the disposition to be ethical, no amount of training will do any good. It just helps the unethical pretend to be ethical.
Waymo, the Google spin-off, is bringing autonomous taxis to Phoenix this fall. Other companies are developing shuttles and other types of pods that work, as described in “Self-driving pods are slow, boring, and weird-looking — and that’s a good thing.” It seems to me that there hasn’t really been a discussion about what would benefit society. Companies will invest where they see economic opportunity, but what should we as a society do with such technology? At the moment the technology seems to be used either in luxury cars, to provide assistance to the driver, or is imagined as a replacement for taxi and Uber drivers. What will happen to these drivers?
The death of a woman hit by a self-driving car highlights an unfolding technological crisis, as code piled on code creates ‘a universe no one fully understands’
The Guardian has a good essay by Andrew Smith about Franken-algorithms: the deadly consequences of unpredictable code. The essay starts with the obvious problems of biased algorithms like those documented by Cathy O’Neil in Weapons of Math Destruction. It then goes further to talk about cases where algorithms are learning on the fly or are so complex that their behaviour becomes unpredictable. An example is the high-frequency trading algorithms that trade on the stock market. These algorithmic traders try to outwit each other and learn as they go, which leads to unpredictable “flash crashes” when they go rogue.
The problem, he (George Dyson) tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.
The good news is that, according to one of the experts consulted, this could lead to “a golden age for philosophy” as we try to sort out the ethics of these autonomous systems.
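To get some intuition for how interacting, adaptive algorithms can amplify a small disturbance into something like a flash crash, here is a toy sketch of my own, not drawn from the essay. A few hypothetical trend-following traders react to each other’s price impact; all the parameters, the “fundamental” buyer and the size of the shock are invented, and nothing here models a real market. The point is only that the same modest shock produces a much deeper fall once the algorithms start reacting to one another.

```python
# Toy illustration of feedback between trend-following algorithms.
# Everything here (traders, parameters, shock) is invented for illustration.
import random

def simulate(momentum_traders, shocks):
    """Run a simple price series driven by external shocks, a stabilising
    'fundamental' buyer, and (optionally) trend-following algorithms."""
    price = 100.0
    history = [price] * 11              # flat starting history
    for shock in shocks:
        orders = 0.0
        if momentum_traders:
            # each hypothetical trader sells if the price fell by more than
            # 1.0 over its lookback window, and buys if it rose by that much
            for lookback in (2, 5, 10):
                move = history[-1] - history[-1 - lookback]
                if move < -1.0:
                    orders -= 1
                elif move > 1.0:
                    orders += 1
        # a 'fundamental' buyer leans gently against departures from 100
        fundamental = 0.05 * (100.0 - price)
        price += shock + 0.4 * orders + fundamental
        history.append(price)
    return history

random.seed(0)
shocks = [random.gauss(0, 0.05) for _ in range(120)]
shocks[50] = -2.0                       # one modest piece of bad news

calm = simulate(False, shocks)
cascade = simulate(True, shocks)
print(f"lowest price without trend-followers: {min(calm):.1f}")
print(f"lowest price with trend-followers:    {min(cascade):.1f}")
```

Real flash crashes involve far more complicated strategies and market structure; the toy only shows how feedback among adaptive agents can make behaviour hard to predict from the rules of any single one.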
Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will e…
A number of venues, including TechCrunch, have discussed the recent Google demonstration of Duplex, an intelligent agent that can make appointments. Many of the stories note how Duplex shows Google failing at ethical and creative AI design. The problem is that the agent didn’t (at least during the demo) identify itself as a robot. Instead it appeared to deceive the person it was talking to. As the TechCrunch article points out, there is really no good reason to deceive if the purpose is to make an appointment.
What I want to know is what are the ethics of dealing with a robot? Do we need to identify as human to the robot? Do we need to be polite and give them the courtesy that we would a fellow human? Would it be OK for me to hang up as I do on recorded telemarketing calls? Most of us have developed habits of courtesy when dealing with people, including strangers, that the telemarketers take advantage of in their scripts. Will the robots now take advantage of that? Or, to be more precise, will those that use the robots to save their time take advantage of us?
A second question is how Google considers the ethical implications of its research. It is easy to castigate them for this demonstration, but the demonstration tells us nothing about a line of research that has been going on for a while or about what processes Google may have in place to check the ethics of what they do. As companies explore the possibilities of AI, how are they to check their ethics amid the excitement of achievement?
We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.
Information Wants to Be Free, Or Does It? The Ethics of Datafication has just come out in the Electronic Book Review. This article, written with Bettina Berendt at KU Leuven, thinks through the ethics of digitization. It first looks at the clichéd phrase “information wants to be free” and then surveys a number of arguments for why some things should be digitized.