Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will e…

A number of venues, including TechCrunch, have discussed the recent Google demonstration of Duplex, an intelligent agent that can make appointments. Many of the stories note how Duplex shows Google failing at ethical and creative AI design. The problem is that the agent didn’t (at least during the demo) identify itself as a robot; instead, it appeared to deceive the person it was talking to. As the TechCrunch article points out, there is really no good reason to deceive if the purpose is simply to make an appointment.

What I want to know is: what are the ethics of dealing with a robot? Do we need to identify ourselves as human to the robot? Do we need to be polite and extend it the courtesy we would extend a fellow human? Would it be OK for me to hang up on it as I do on recorded telemarketing calls? Most of us have developed habits of courtesy when dealing with people, including strangers, habits that telemarketers take advantage of in their scripts. Will the robots now take advantage of them too? Or, to be more precise, will those who use the robots to save their own time take advantage of us?

A second question is how Google considers the ethical implications of its research. It is easy to castigate the company for this demonstration, but the demonstration tells us nothing about the line of research behind it, which has been going on for a while, or about what processes Google may have in place to check the ethics of what it does. As companies explore the possibilities of AI, how are they to check their ethics in the excitement of achievement?

I should note that Google’s parent Alphabet has apparently dropped the “Don’t be evil” motto from its code of conduct. There has also been news about a number of employees quitting over a Google program to apply machine learning to drone footage for the military, after more than 3,000 Google employees signed a letter taking issue with the project. See also the Open Letter in Support of Google Employees and Tech Workers that researchers signed. As they say:

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

 

John Stuart Mill marginalia project

Project to digitise and publish his marginalia online will allow scholars to see his cutting remarks on Ralph Waldo Emerson

The Guardian has a story on an interesting digital humanities project, JS Mill scribbles reveal he was far from a chilly Victorian intellectual. The project, Mill Marginalia Online, is digitizing an estimated 40,000 comments, doodles, and other marks that John Stuart Mill made in his collection of 1,700 books, now at Somerville College, Oxford. His collection was donated to Somerville in 1905, roughly 30 years after his death, because the women of the college weren’t allowed to access the Oxford libraries at the time.

His comments are not just scholarly notes. For example, Mill added text to the title page of Emerson’s Essays in order to mock it. With Mill’s penciled-in elaboration added to the original, the title page reads,

Philosophy Bourgeois,
being
Sentimental Essays: in the art of
Intimately blending
Sense and Nonsense:
by
R. W. Emerson,
of Concord, Massachusetts.
A clever + well organised youth brought up
in the old traditions.
Motto
In thought “all’s fish that comes to net.”
With Fog Preface
By Thomas Carlyle.
“Patent Divine-light Self-acting Foggometer”
To the Court of
Her mAJESTy Queen Vic.

A JEST indeed. The Daily Nous has an article on this with the title, Mill’s Myriad Marginalia: Mundane, Mysterious, Mocking.

All this from Humanist.

 

Sustainable Research: Around the World Conference

This week I am participating in the 6th Around the World Conference organized by the Kule Institute for Advanced Study. This e-conference (electronic conference) is on Sustainable Research, and we have a panel on a different topic every day of the week. (If you miss a panel, check out our YouTube channel.) Today we had a fabulous panel on Art and/in the Anthropocene led by Natalie Loveless and Jesse Beier. You can see some thoughts on the e-conference under the Twitter hashtag #ATW2018, which we share with the American Trombone Workshop.

Manifest Attention

One of the problems with e-conferences is that they are local for everyone, which means people tune in and out depending on what else they have scheduled rather than devoting the time. When you fly to a conference you can’t be expected to leave it for a meeting, but when a conference is local or online we tend not to pay the attention we would pay when away.

This has to change if we are to wean ourselves off flying every time we want to pay attention to a conference. We have to learn to be deliberate about allocating time to an e-conference. We have to manifest attention.

CBC TV: The Artists

The Artists is the story of the creators that were at the forefront of the early video game revolution. It explores the first three decades of video game history.

I just finished watching the CBC TV series The Artists – Season 1. This is a series of short (9–12 minute) video essays that you can watch on the web. The series focuses on game designers as artists and starts and ends with an early and influential ad, We see farther, that EA (Electronic Arts) ran to showcase its developers, something other companies (like Atari) didn’t do.

The CBC series is well done, though I find the shorts too short. I wish they lingered a bit more over the clips of games and other historic materials. The kinetic style of the shorts may suit the medium, but not the history.

My other gripe is their choice of game designers to feature. There are no Japanese game designers. In fact, it is as if no one outside of the US and Canada designed games at all. They could have also covered some influential women designers like Brenda Laurel.

What is great is episode 9, on Bioware (and Edmonton!). I didn’t realize that Greg Zeschuk, one of the founders of Bioware, started the Blind Enthusiasm Brewing Company, which has a brewery and restaurant near my house.

Dan Hett’s game “c ya laterrrr”

c ya laterrrr is a text “game” by Dan Hett documenting his experience of the Manchester terror attack, in which he lost his brother. “c ya laterrrr” was the last message he got from his brother. I found the game through an interview in the Guardian that talks about the games he is making. Another game, which is less narration and more 8-bit graphics, is the Loss Levels, made with Pico-8.

As both games deal with the same event, they make for an interesting comparison of genres. I find the text adventure much more effective for this subject, as you feel the event unfold and the decisions give you a feeling for the experience.

Google AI experiment has you talking to books

Google has announced some cool text projects. See Google AI experiment has you talking to books. One of them, Talk to Books, lets you ask questions or type statements and get answers in the form of passages from books. This strikes me as a useful research tool, as it surfaces some (book) references that might help in defining an issue. The project is somewhat similar to the Veliza tool that we built into Voyant. Veliza is given a particular text and then uses an Eliza-like algorithm to answer you with passages from that text. Needless to say, Talk to Books is far more sophisticated and is not based simply on word searches. Veliza, on the other hand, can be reprogrammed, and you can specify the text to converse with.
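I won’t reproduce how Veliza is actually implemented, but the basic idea of an Eliza-like responder that replies with passages from a supplied text can be sketched in a few lines of Python. This is a hypothetical illustration, not the Voyant code; the sample text and stopword list are placeholders:

```python
import random
import re

def build_index(text):
    """Split a text into sentences and index each sentence by the words it contains."""
    sentences = re.split(r'(?<=[.!?])\s+', text)
    index = {}
    for s in sentences:
        for word in set(re.findall(r'[a-z]+', s.lower())):
            index.setdefault(word, []).append(s)
    return index

STOPWORDS = {"the", "a", "is", "of", "and", "to", "what", "in", "you", "do", "about"}

def respond(question, index):
    """Reply with a passage that shares a content word with the question,
    or fall back to a generic Eliza-style prompt."""
    words = [w for w in re.findall(r'[a-z]+', question.lower()) if w not in STOPWORDS]
    random.shuffle(words)
    for w in words:
        if w in index:
            return random.choice(index[w])
    return "Tell me more about that."

# Converse with a (placeholder) text
text = ("Call me Ishmael. Some years ago, never mind how long precisely, "
        "having little or no money in my purse, I thought I would sail about "
        "a little and see the watery part of the world.")
print(respond("What do you think about the world?", index=build_index(text)))
```

Crude word matching of this sort is all a sketch like this does; the point is only to show how a conversational surface can be laid over simple passage retrieval, which is where Talk to Books goes far beyond word searches.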


The Ethics of Datafication


Information Wants to Be Free, Or Does It? The Ethics of Datafication has just come out in the Electronic Book Review. This article, written with Bettina Berendt at KU Leuven, is about the ethics of digitization. It first looks at the cliché “information wants to be free” and then moves on to survey a number of arguments for why some things should be digitized.

The Aggregate IQ Files, Part One: How a Political Engineering Firm Exposed Their Code Base

The Research Director for UpGuard, Chris Vickery (@VickerySec), has uncovered code repositories from AggregateIQ, the Canadian company that was building tools for/with SCL and Cambridge Analytica. See The Aggregate IQ Files, Part One: How a Political Engineering Firm Exposed Their Code Base and AggregateIQ Created Cambridge Analytica’s Election Software, and Here’s the Proof from Gizmodo.

The screenshots from the repository show a project called Ephemeral with the description “Because there is no such thing as THE TRUTH”. The “Primary Data Storage” of Ephemeral is called “Mamba Jamba”, presumably a joke on “mumbo jumbo”, which isn’t a good sign. What is more interesting is the description of the data storage system as “The Database of Truth”. Here is a selection of that description:

The Database of Truth is a database system that integrates, obtains, and normalizes data from disparate sources including starting with the RNC data trust.  … This system will be created to make decisions based upon the data source and quality as to which data constitutes the accepted truth and connect via integrations or API to the source systems to acquire and update this data on a regular basis.

A robust front-end system will be built that allows an authrized user to query the Database of Truth to find data for a particular upcoming project, to see how current the data is, and to take a segment of that data and move it to the Escrow Database System. …

The Database of Truth is the Core source of data for the entire system. …
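One can only guess at how the system was actually built, but the pattern the description names, integrating records from disparate sources and letting source “quality” decide which value counts as the accepted truth, might look something like the following hypothetical Python sketch. The sources, fields, and ranking here are invented for illustration and are not taken from the repository:

```python
# Hypothetical illustration of a source-priority merge; not AggregateIQ's code.
# A lower rank means the source is treated as more "authoritative".
SOURCE_RANK = {"rnc_data_trust": 0, "commercial_broker": 1, "web_scrape": 2}

def merge_records(records):
    """Merge records about one person, keeping, for each field, the value
    supplied by the highest-ranked source that has it."""
    merged = {}
    for rec in sorted(records, key=lambda r: SOURCE_RANK[r["source"]]):
        for field, value in rec["fields"].items():
            merged.setdefault(field, value)  # first (best-ranked) value wins
    return merged

records = [
    {"source": "web_scrape", "fields": {"email": "a@example.com", "age": "34"}},
    {"source": "rnc_data_trust", "fields": {"age": "36"}},
]
print(merge_records(records))  # {'age': '36', 'email': 'a@example.com'}
```

However Ephemeral was really implemented, some such ranking has to decide what gets to count as “the accepted truth”.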

One wonders if there is a philosophical theory, of sorts, in Ephemeral. A theory where no truth is built on the mumbo jumbo of a database of truth(s).

Ephemeral would seem to be part of Project Ripon, the system that Cambridge Analytica never really delivered to the Cruz campaign. Perhaps the system was so ephemeral that it never worked and therefore the Database of Truth never held THE TRUTH. Ripon might be better called Ripoff.

After the Facebook scandal it’s time to base the digital economy on public v private ownership of data

In a nutshell, instead of letting Facebook get away with charging us for its services or continuing to exploit our data for advertising, we must find a way to get companies like Facebook to pay for accessing our data – conceptualised, for the most part, as something we own in common, not as something we own as individuals.

Evgeny Morozov has a great essay in The Guardian on how After the Facebook scandal it’s time to base the digital economy on public v private ownership of data. He argues that better data protection is not enough. We need “to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data.” In Alberta that may start with Connect Care, a centralized clinical information system managed by the Province. The Province will presumably control access to our data, granting it to those researchers and health-care practitioners who commit to using it appropriately. Can we imagine a model where Connect Care is expanded to include social data that we then control and can give others (businesses) access to?

Research Team Security

One of the researchers on the GamerGate Reactions team has created a fabulous set of recommendations for team members doing dangerous research. See Security_Recommendations_2018_v2.0. This document brings together in one place a lot of information and links on how to secure your identity and research. The researcher put it together in support of a panel on Risky Research that I am chairing this afternoon, part of a day of panels/workshops following the Edward Snowden talk yesterday evening. (You can see my blog entry on Snowden’s talk here.) The key topics covered include:

  • Basic Security Measures
  • Use End-to-End Encryption for Communications
  • Encrypt Your Computer (see the sketch after this list)
  • Destroy All Information
  • Secure Browsing
  • Encrypt all Web Traffic
  • Avoiding Attacks
  • On Preventing Doxing
  • Dealing with Harassment
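
The recommendations document is the place to go for specifics, but as a minimal illustration of what encrypting sensitive research files can look like in practice, here is a sketch using the Python cryptography package. This is my example, not taken from the document, and the notes and file name are made up:

```python
# A minimal sketch of symmetric encryption with the "cryptography" package
# (pip install cryptography). Illustrative only; not from the recommendations.
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safer than the data itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Pretend these are sensitive research notes (hypothetical content).
notes = b"Interview 12: participant describes the harassment campaign..."

# Encrypt before the notes ever touch a shared drive or cloud folder.
with open("notes.enc", "wb") as f:
    f.write(fernet.encrypt(notes))

# Later, anyone holding the key can recover the plaintext.
with open("notes.enc", "rb") as f:
    print(fernet.decrypt(f.read()).decode())
```

The design point is simply that the data and the key travel separately: losing a laptop or a USB stick then exposes only ciphertext.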