Self-driving pods are slow, boring, and weird-looking — and that’s a good thing

Driverless pods, retirement communities, and grocery delivery

Autonomous vehicles are here! That’s the message from a panel on AI and Transportation I listened to at the International Symposium on Applications of Artificial Intelligence held here at the University of Alberta.

Waymo, the Google spin-off, is bringing autonomous taxis to Phoenix this fall. Other companies are developing shuttles and other types of pods that work.

It seems to me that there hasn’t really been a discussion about what would benefit society. Companies will invest where they see economic opportunity, but what should we as a society do with such technology? At the moment the technology seems to be used either in luxury cars to provide assistance to the driver or imagined as a replacement for taxi and Uber drivers. What will happen to these drivers?

AI Weirdness

I just came across a neat site called AI Weirdness. The site describes all sorts of “weird” experiments in training neural networks. Some examples:

The site has a nice FAQ that describes the tools the author uses and how you can learn to do this sort of thing yourself.

Franken-algorithms: the deadly consequences of unpredictable code

The death of a woman hit by a self-driving car highlights an unfolding technological crisis, as code piled on code creates ‘a universe no one fully understands’

The Guardian has a good essay by Andrew Smith about Franken-algorithms: the deadly consequences of unpredictable code. The essay starts with the obvious problems of biased algorithms like those documented by Cathy O’Neil in Weapons of Math Destruction. It then goes further to talk about cases where algorithms are learning on the fly or are so complex that their behaviour becomes unpredictable. An example is the high-frequency trading algorithms that trade on the stock market. These algorithmic traders try to outwit each other and learn as they go, which leads to unpredictable “flash crashes” when they go rogue.

The problem, he (George Dyson) tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

The good news is that, according to one of the experts consulted, this could lead to “a golden age for philosophy” as we try to sort out the ethics of these autonomous systems.
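
To make the flash-crash dynamic a bit more concrete, here is a toy simulation of two deterministic momentum-following rules reacting to the same price history. Everything in it (the agents, thresholds, and price model) is invented for illustration and bears no resemblance to real trading systems:

```python
import random

# A toy "market": two deterministic momentum rules react to the same price
# history. Agents, thresholds, and the price model are all made up.
def momentum_agent(history, window, threshold, panic_size):
    """Dump stock if the price fell more than `threshold` over the last
    `window` ticks; otherwise buy a little."""
    if len(history) <= window:
        return 0.5
    change = history[-1] - history[-1 - window]
    return -panic_size if change < -threshold else 0.5

price = 100.0
history = [price]
for tick in range(300):
    order_a = momentum_agent(history, window=3, threshold=0.5, panic_size=20)
    order_b = momentum_agent(history, window=5, threshold=1.0, panic_size=20)
    noise = random.gauss(0, 3)  # background buying and selling
    price *= 1 + 0.001 * (order_a + order_b + noise)
    history.append(price)

print(f"start: {history[0]:.2f}  low: {min(history):.2f}  end: {price:.2f}")
```

On many runs the price just drifts along; on others a random dip trips one agent’s threshold, its selling trips the other’s, and the toy market collapses within a few ticks and never recovers. Each rule is simple and deterministic, but nobody designed the crash, which is rather the point.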

Writing with the machine

“…it’s like writing with a deranged but very well-read parrot on your shoulder.”

Robin Sloan, of Mr. Penumbra’s 24-hour Bookstore fame, has been talking about Writing with the machine. He was inspired by presentations like Andrej Karpathy’s blog post on The Unreasonable Effectiveness of Recurrent Neural Networks and Bowman et al.’s Generating Sentences from a Continuous Space to try developing a neural net that could generate text. He used a collection of early science fiction from the Internet Archive as a training corpus and created different text-generation tools, like the one shown in the short video above and explained in this Eyeo video.
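
For readers curious about what is actually involved, here is a minimal character-level language model in the spirit of Karpathy’s char-rnn, sketched in PyTorch. This is not Sloan’s code: the file name corpus.txt is a placeholder for whatever plain text you train on, and the model size and number of training steps are arbitrary:

```python
import torch
import torch.nn as nn

# A minimal character-level language model in the spirit of char-rnn.
# "corpus.txt" is a placeholder name: any plain-text file (say, public-domain
# science fiction downloaded from the Internet Archive) will do.
text = open("corpus.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

# Train on random slices of the corpus, predicting each next character.
for step in range(2000):
    i = torch.randint(0, len(data) - 129, (1,)).item()
    x = data[i:i + 128].unsqueeze(0)
    y = data[i + 1:i + 129].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample by feeding the model its own guesses one character at a time.
idx = torch.tensor([[stoi[text[0]]]])
state, out = None, [text[0]]
for _ in range(400):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    idx = torch.multinomial(probs, 1).view(1, 1)
    out.append(itos[idx.item()])
print("".join(out))
```

Even a toy like this, trained on a stack of old science fiction, will start producing the deranged-but-well-read phrases Sloan describes, and sampling from it mid-sentence is what makes it feel like a writing partner rather than a generator.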

One of the points he emphasizes is that he didn’t do this just for the fun of seeing strange phrases generated, but wants to use it seriously as a writing aid.

I can’t help wondering if this could be used philosophically. Could we generate philosophical or ethical phrases in response to questions?

Re-Imagining Education In An Automating World conference at George Brown

On May 25th I had a chance to attend a gem of a conference organized by the Philosophy of Education (POE) committee at George Brown. The conference mixed different modalities, from conversations to formal talks to group work. The topic was Re-Imagining Education in An Automating World (see my conference notes here), and this conference is a seed for a larger one next year.

I gave a talk on Digital Citizenship at the end of the day where I tried to convince people that:

  • Data analytics are now a matter of citizenship (we all need to understand how we are being manipulated).
  • We therefore need to teach data literacy in the arts and humanities, so that
  • Students are prepared to contribute to and critique the ways analytics are used and deployed.
  • This can be done by integrating data and analytical components in any course using field-appropriate data.


Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will e…

A number of venues, including TechCrunch, have discussed the recent Google demonstration of Duplex, an intelligent agent that can make appointments. Many of the stories note how Duplex shows Google failing at ethical and creative AI design. The problem is that the agent didn’t (at least during the demo) identify itself as a robot. Instead it appeared to deceive the person it was talking to. As the TechCrunch article points out, there is really no good reason to deceive if the purpose is to make an appointment.

What I want to know is what are the ethics of dealing with a robot? Do we need to identify as human to the robot? Do we need to be polite and give them the courtesy that we would a fellow human? Would it be OK for me to hang up as I do on recorded telemarketing calls? Most of us have developed habits of courtesy when dealing with people, including strangers, that the telemarketers take advantage of in their scripts. Will the robots now take advantage of that? Or, to be more precise, will those that use the robots to save their time take advantage of us?

A second question is how Google considers the ethical implications of its research. It is easy to castigate them for this demonstration, but the demonstration tells us nothing about the line of research that has been going on for a while, or about what processes Google may have in place to check the ethics of what they do. As companies explore the possibilities of AI, how are they to check their ethics in the excitement of achievement?

I should note that Google’s parent Alphabet has apparently dropped the “Don’t be evil” motto from its code of conduct. There has also been news that a number of employees quit over a Google program to apply machine learning to drone footage for the military. This is after over 3,000 Google employees signed a letter taking issue with the project. See also the Open Letter in Support of Google Employees and Tech Workers that researchers signed. As they say:

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.


Google AI experiment has you talking to books

Google has announced some cool text projects. See Google AI experiment has you talking to books. One of them, Talk to Books, lets you ask questions or type statements and get answers in the form of passages from books. This strikes me as a useful research tool, as it lets you see some (book) references that might be useful for defining an issue. The project is somewhat similar to the Veliza tool that we built into Voyant. Veliza is given a particular text and then uses an Eliza-like algorithm to answer you with passages from that text. Needless to say, Talk to Books is far more sophisticated and is not based simply on word searches. Veliza, on the other hand, can be reprogrammed, and you can specify the text to converse with.
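
Veliza’s real implementation lives in Voyant, but the general idea can be sketched in a few lines: split a chosen text into passages and answer a question with the passage that shares the most vocabulary with it. The code below is only an illustration of that idea (the file name and the scoring are made up), not how Veliza or Talk to Books actually work:

```python
import re
from collections import Counter

# A rough sketch of the Veliza-style idea: reply to a question with the
# passage from a chosen text that shares the most vocabulary with it.
# "hume_enquiry.txt" is just a placeholder file name.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "it", "that",
             "what", "who", "how", "why", "do", "does", "are", "you", "i"}

def passages(text, size=3):
    """Split a text into overlapping passages of `size` sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [" ".join(sentences[i:i + size]) for i in range(len(sentences))]

def respond(question, text):
    """Return the passage with the most non-stopword overlap with the question."""
    q_words = set(re.findall(r"[a-z']+", question.lower())) - STOPWORDS
    def score(passage):
        counts = Counter(re.findall(r"[a-z']+", passage.lower()))
        return sum(counts[w] for w in q_words)
    return max(passages(text), key=score)

print(respond("What is the origin of our ideas?",
              open("hume_enquiry.txt", encoding="utf-8").read()))
```

As noted above, Talk to Books replaces this kind of literal word matching with something far more sophisticated, which is why its answers feel less like search results.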


After the Facebook scandal it’s time to base the digital economy on public v private ownership of data

In a nutshell, instead of letting Facebook get away with charging us for its services or continuing to exploit our data for advertising, we must find a way to get companies like Facebook to pay for accessing our data – conceptualised, for the most part, as something we own in common, not as something we own as individuals.

Evgeny Morozov has a great essay in The Guardian on how After the Facebook scandal it’s time to base the digital economy on public v private ownership of data. He argues that better data protection is not enough. We need “to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data.” In Alberta that may start with Connect Care, a centralized clinical information system managed by the Province. The Province will presumably control access to our data, granting it to those researchers and health-care practitioners who commit to using it appropriately. Can we imagine a model where Connect Care is expanded to include social data that we could then control and give others (businesses) access to?

Distant Reading after Moretti

The question I want to explore today is this: what do we do about distant reading, now that we know that Franco Moretti, the man who coined the phrase “distant reading,” and who remains its most famous exemplar, is among the men named as a result of the #MeToo movement.

Lauren Klein has posted an important blog entry on Distant Reading after Moretti. The essay is based on a talk delivered at the 2018 MLA convention for a panel on Varieties of Digital Humanities. Klein asks about distant reading and whether it shelters sexual harassment in some way. She asks us to put not just the persons, but the structures of distant reading and the digital humanities under investigation. She suggests that it is “not a coincidence that distant reading does not deal well with gender, or with sexuality, or with race.” One might go further and ask if the same isn’t true of the digital humanities in general, or the humanities, for that matter. Klein then suggests some things we can do about it:

  • We need more accessible corpora that better represent the varieties of human experience.
  • We need to question our models and ask about what is assumed or hidden.


What a fossil revolution reveals about the history of ‘big data’

Example of Heinrich Georg Bronn’s Spindle Diagram

David Sepkoski has published a nice essay in Aeon about What a fossil revolution reveals about the history of ‘big data’. Sepkoski talks about his father (Jack Sepkoski), a paleontologist, who developed the first database to provide a comprehensive record of fossils. This data was used to interpret the fossil record differently. The essay argues that it changed how we “see” data and showed that there had been mass extinctions before (and that we might be in one now).

The analysis that he and his colleagues performed revealed new understandings of phenomena such as diversification and extinction, and changed the way that palaeontologists work.

Sepkoski (father) and colleagues

The essay then makes the interesting move of arguing that, in fact, Jack Sepkoski was not the first to do quantitative palaeontology. The son, a historian, argues that Heinrich Georg Bronn in the 19th century was collecting similar data on paper and visualizing it (see spindle diagram above), but his approach didn’t take.

This raises the question of why Sepkoski senior’s data-driven approach changed palaeontology while Bronn’s didn’t. Sepkoski junior’s answer is a combination of changes. First, palaeontology became more receptive to ideas like Stephen Jay Gould’s “punctuated equilibrium” that challenged Darwin’s gradualist view. Second, culture has become more open to data-driven approaches and to the visualizations needed to interpret them.
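
To give a concrete sense of the kind of analysis such a database makes possible, here is a toy diversity curve: given first and last appearances for a handful of taxa, count how many are extant in each time bin. The genera and dates below are invented placeholders, not real fossil ranges, and the actual compendium covered vastly more taxa:

```python
# A toy diversity curve: count how many taxa are extant in each time bin,
# given first and last appearances in millions of years ago (Ma).
# The genera and dates are invented placeholders, not real fossil data.
ranges = {
    "Genus A": (520, 480),
    "Genus B": (510, 250),
    "Genus C": (300, 252),
    "Genus D": (250, 66),
    "Genus E": (200, 0),
}

def diversity(ranges, start=540, stop=0, step=10):
    """Return (age, count) pairs: how many ranges span each time bin."""
    return [(age, sum(1 for first, last in ranges.values() if first >= age >= last))
            for age in range(start, stop - 1, -step)]

for age, count in diversity(ranges):
    print(f"{age:4d} Ma: {'#' * count}")
```

Plotted over geological time, counts like these are what reveal the big drops that get read as mass extinctions.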

The essay concludes by warning us about the dangers of trusting black-boxed data and visualizations that we can’t unpack.

Yet in our own time, it’s taken for granted that the best way of understanding large, complex phenomena often involves ‘crunching’ the numbers via computers, and projecting the results as visual summaries.

That’s not a bad thing, but it poses some challenges. In many scientific fields, from genetics to economics to palaeobiology, a kind of implicit trust is placed in the images and the algorithms that produce them. Often viewers have almost no idea how they were constructed.

This leads me to ask about the warning as gesture. It is a gesture we see more and more, especially around the ethics of big data and artificial intelligence. Every thoughtful person, myself included, has warned people about the dangers of these apparently new technologies. But what good are these warnings?

Johanna Drucker in Graphesis proposes what is to my mind a much healthier approach to the dangers and opportunities of visualization. She does what humanists do: she asks us to think of visualization as interpretation. If you think of it this way, then it is no more or less dangerous than any other interpretation. And we have the tools to think through visualization. She shows us how to look at the genealogy of different types of visualization. She shows us how all visualizations are interpretations and therefore need to be read. She frees us to be interpretative with our visualizations. If they are made by the visualizer and are not given by the data as if by Moses coming down the mountain, then they are an art that we can play with and through. This is what the 3DH project is about.