The bad things that happen when algorithms run online shops

Smart software controls the prices and products you see when you shop online – and sometimes it can go spectacularly wrong, discovers Chris Baraniuk.

The BBC has a story about The bad things that happen when algorithms run online shops. The story describes how e-commerce systems designed to set prices dynamically (by comparison with someone else’s price, for example) can go wrong and end up charging customers far more than they are willing to pay, or charging them virtually nothing so the store loses money.

The story links to an instructive blog entry by Michael Eisen, Amazon’s $23,698,655.93 book about flies, about how two algorithms pushed the price of a book into the millions. The blog entry is a perfect little story about the problems you get when algorithms respond iteratively to each other without any sanity checks.
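To see how quickly two such rules compound, here is a toy simulation (my own sketch, not Eisen’s code; the multipliers approximate the ones he reported, with one seller pricing just under the other and the second marking up well above the first):

```python
# Toy simulation of two repricing bots with no sanity checks.
# The multipliers are illustrative, close to the ones Eisen reported.

A_FACTOR = 0.9983  # seller A prices just under seller B
B_FACTOR = 1.2706  # seller B prices well above seller A

price_b = 18.99  # a plausible starting price for a used book

for day in range(1, 61):
    price_a = A_FACTOR * price_b  # A reprices against B
    price_b = B_FACTOR * price_a  # B reprices against A
    if day % 10 == 0:
        print(f"day {day}: A = ${price_a:,.2f}  B = ${price_b:,.2f}")

# Each full cycle multiplies prices by roughly 1.268, so without a
# ceiling check the book climbs into the millions within two months.
```

Neither rule is crazy on its own; it is the unchecked feedback loop between them that produces a $23 million book about flies.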

What coding really teaches children

You’ve seen movies where programmers pound out torrents of code? That is nothing like reality. Most of the time, coders don’t type at all; they sit and stare morosely at the screen, running their hands through their hair, trying to spot what they’ve done wrong. It can take hours, days, or even weeks. But once the bug is fixed and the program starts working again, the burst of pleasure has a narcotic effect.

Stéfan pointed me to a nice opinion piece about programming education in the Globe and Mail titled Opinion: What coding really teaches children. Clive Thompson argues that teaching programming in elementary school will not necessarily teach math, but it can teach kids about the digital world and the persistence it takes to get complex things working. He also worries, as I do, about asking elementary teachers to learn enough coding to be able to teach it. This could be a recipe for alienating a lot of students who are taught by teachers who haven’t really learned it themselves.

The Last One

Whatever happened to The Last One software? The Last One (TLO) was a “program generator” that was supposed to take input from a user who wasn’t a programmer and be able to generate a BASIC program.
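To give a flavour of the idea (this is purely my own toy illustration; the real TLO built its BASIC from an interactive, flowchart-style dialogue with the user), a “program generator” is simply a program whose output is another program:

```python
# Toy "program generator" in the spirit of TLO (illustrative only --
# the real TLO generated BASIC from a flowchart-style Q&A session).

def generate_basic(question: str, var: str) -> str:
    """Emit a tiny BASIC program that asks a question and echoes the answer."""
    return "\n".join([
        f'10 PRINT "{question}"',
        f"20 INPUT {var}$",
        f'30 PRINT "YOU SAID: "; {var}$',
        "40 END",
    ])

print(generate_basic("WHAT IS YOUR NAME?", "N"))
```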

TLO was developed by a company called D.J. “AI” Systems Ltd. that was set up by David James, who became interested in artificial intelligence when he bought a computer for his business and apparently got so distracted by that interest that it bankrupted him (and he lost his computers). It was funded by an equally colourful character, Scotty Bambury, who made his money as a tire dealer in Somerset. (See here and here.)

Personal Computer magazine cover from here

The name (The Last One) refers to the expectation that this would be the last software you would ever need to buy. As the cover image above shows, they were imagining programmers being put out of work by an AI that could reprogram itself. TLO would be the last software you had to buy and possibly the first AI capable of recursively improving itself. D.J. “AI” could have been spinning up the seed AI that would lead to the singularity!

Here is some of the text from an ad for TLO. The text ran under the spacey headline at the top of this post.

The first program you should buy. …

THE LAST ONE … The program that writes programs!

Now, for the first time, your computer is truly ‘personal’. Now, simply and easily, you can create software the way you want it. …

Yet another sense of “personal” in “personal computer” – a computer where all your software (except, of course, TLO) is personally developed. Imagine a computer that you trained to do what you needed. This was the situation with early mainframes – programmers had to develop applications individually for each system; they just didn’t have TLO.

The tech ‘solutions’ for coronavirus take the surveillance state to the next level

Neoliberalism shrinks public budgets; solutionism shrinks public imagination.

Evgeny Morozov has a crisp essay in The Guardian on how The tech ‘solutions’ for coronavirus take the surveillance state to the next level. He argues that neoliberal austerity cut back our public services in ways that, we now see, are endangering lives, but that it is solutionism that is constraining our ideas about what we can do to deal with such situations. If we look for a technical solution, we give up on questioning the underlying defunding of the commons.

There is a nice interview between Natasha Dow Schüll and Morozov, The Folly of Technological Solutionism: An Interview with Evgeny Morozov, in which they talk about his book To Save Everything, Click Here: The Folly of Technological Solutionism and about gamification.

Back in The Guardian, he ends his essay warning that we should not let the debate narrow to picking between apps – between solutions. We should get beyond solutions like apps to thinking politically.

The feast of solutionism unleashed by Covid-19 reveals the extreme dependence of the actually existing democracies on the undemocratic exercise of private power by technology platforms. Our first order of business should be to chart a post-solutionist path – one that gives the public sovereignty over digital platforms.

Welcome to Dialogica: Thinking-Through Voyant!

Do you need online teaching ideas and materials? Dialogica was supposed to be a textbook, but instead we are adapting it for use in online learning and self-study. It is shared here under a CC BY 4.0 license so you can adapt it as needed.

Stéfan Sinclair and I have put up a web site with tutorial materials for learning Voyant. See Dialogi.ca: Thinking-Through Voyant!

Dialogica (http://dialogi.ca) plays with the idea of learning through dialogue: a dialogue with the text, a dialogue mediated by the tool, and a dialogue with instructors like us.

Dialogica is made up of a set of tutorials that students should be able to work through alone or with minimal support. These are Word documents that you (instructors) can edit to suit your teaching, and we are adding to them; so far we have added a gloss of teaching notes. Later we plan to add Spyral notebooks that go into greater detail on technical subjects, including how to program in Spyral.

Dialogica is made available with a CC BY 4.0 license so you can do what you want with it as long as you give us some sort of credit.

Show and Tell at CRIHN

Stéphane Pouyllau’s photo of me presenting

Michael Sinatra invited me to a “show and tell” workshop at the new Université de Montréal campus, where they have a long data wall. Sinatra is the Director of CRIHN (Centre de recherche interuniversitaire sur les humanités numériques) and kindly invited me to show what I am doing with Stéfan Sinclair and to see what others at CRIHN and in France are doing.


The End of Agile

I knew the end of Agile was coming when we started using hockey sticks.

From Slashdot I found my way to a good essay on The End of Agile by Kurt Cagle in Forbes.

The Agile Manifesto, like most such screeds, started out as a really good idea. The core principle was simple – you didn’t really need large groups of people working on software projects to get them done. If anything, beyond a certain point extra people just added to the communication impedance and slowed a project down. Many open source projects that did really cool things were done by small development teams of between a couple and twelve people, with the ideal size being about seven.

Cagle points out that certain types of enterprise projects don’t lend themselves to agile development. In a follow-up article he provides links to rebuttals and supporting articles, including one on Agile and Toxic Masculinity (it turns out there is a lot of sporting/speed talk in agile). He proposes the Studio model as an alternative, a model based on how creative works like movies and games get made, with an emphasis on creative direction and vision.

I wonder how this critique of agile could be adapted to critique agile-inspired management techniques?

$432 000 painting “by AI” sold at Christie’s

A painting created using GANs (generative adversarial networks) sold for $432 000 at Christie’s today.

Last year a $432 000 painting “by AI” sold at Christie’s. The painting was created by a collective called Obvious. They used a Generative Adversarial Network. In an essay titled, A naive yet educated perspective on Art and Artificial Intelligence, they talk about how they created the work.

Generative Adversarial Networks (GANs) analyze tens of thousands of images, learn from their features, and are trained with the aim to create new images that are undistinguishable from the original data source.
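The adversarial idea itself fits in a few lines of code. Here is a minimal sketch (my own toy in PyTorch, nowhere near the scale of the image model Obvious trained) in which a generator learns to produce 2-D points that a discriminator cannot tell apart from samples on a noisy circle:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Real data": points on a noisy circle of radius 2.
    theta = torch.rand(n, 1) * 2 * torch.pi
    pts = torch.cat([2 * torch.cos(theta), 2 * torch.sin(theta)], dim=1)
    return pts + 0.05 * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator: score real points as 1, generated points as 0.
    real = real_samples(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make D score its fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)))  # five generated points, ideally near the circle
```

Swap the 2-D points for images (and the two small networks for deep convolutional ones) and you have the kind of system Obvious describes.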

They also point out that many of the same concerns people have about AI art today were voiced about photography in the 19th century. Photography automated the image making business much as AIs are automating other tasks.

Can we use these GANs for other generative scholarship?

AI Weirdness

I just came across a neat site called AI Weirdness, by Janelle Shane. The site describes all sorts of “weird” experiments in training neural networks.

The site has a nice FAQ that describes her tools and how to learn to do this sort of thing yourself.

Franken-algorithms: the deadly consequences of unpredictable code

The death of a woman hit by a self-driving car highlights an unfolding technological crisis, as code piled on code creates ‘a universe no one fully understands’

The Guardian has a good essay by Andrew Smith about Franken-algorithms: the deadly consequences of unpredictable code. The essay starts with the obvious problems of biased algorithms like those documented by Cathy O’Neil in Weapons of Math Destruction. It then goes further, talking about cases where algorithms learn on the fly or are so complex that their behaviour becomes unpredictable. An example is the high-frequency trading algorithms that trade on the stock market: these algorithmic traders try to outwit and learn from each other, which leads to unpredictable “flash crashes” when they go rogue.

The problem, he (George Dyson) tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.
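Dyson’s point is easy to demonstrate. Here is a minimal illustration (mine, not from the essay) using the logistic map: a one-line deterministic rule whose trajectories are practically unpredictable, because two starting points that differ by a ten-billionth soon disagree completely:

```python
# Deterministic does not mean predictable: one fixed rule, two starting
# values 1e-10 apart, and the trajectories soon diverge completely.

def step(x, r=3.9):
    return r * x * (1 - x)  # the logistic map, fully deterministic

x, y = 0.5, 0.5 + 1e-10
for i in range(1, 61):
    x, y = step(x), step(y)
    if i % 10 == 0:
        print(f"step {i}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.2e}")
```

If a rule this simple defeats prediction, it is no surprise that ecosystems of interacting, learning trading algorithms do.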

The good news is that, according to one of the experts consulted, this could lead to “a golden age for philosophy” as we try to sort out the ethics of these autonomous systems.