Cybersyn: Before the Coup, Chile Tried to Find the Right Software for Socialism

Image of Cybersyn Opsroom

In New York for my last f2f meeting of the MLA Committee on Information Technology, I picked up a New York Times with an intriguing article about a Chilean management system, Cybersyn: Before the Coup, Chile Tried to Find the Right Software for Socialism.

Cybersyn was born in July 1971 when Fernando Flores, then a 28-year-old government technocrat, sent a letter to Mr. Beer seeking his help in organizing Mr. Allende’s economy by applying cybernetic concepts. Mr. Beer was excited by the prospect of being able to test his ideas.

He wanted to use the telex communications system – a network of teletypewriters – to gather data from factories on variables like daily output, energy use and labor “in real time,” and then use a computer to filter out the important pieces of economic information the government needed to make decisions.

Cybersyn was apparently semi-functional before the coup that overthrew Allende’s government, and it was used to help the government manage around the small-business and truckers’ strike of October 1972. I don’t think the Opsroom pictured above was ever fully operational, but the visualization screens were important even if, at the time, they were hand-drawn slides that were projected rather than computer-generated visualizations (see http://varnelis.net/blog/kazys/project_cybersyn on the chairs of the Opsroom). Beer and the Chileans wanted Cybersyn to help them implement an alternative socialist economy, one managed in real time rather than left “free” and chaotic, or planned in the heavy-handed way of most socialist economies of the time.

Rooting around, I found a good 2003 Guardian article by Andy Beckett about Cybersyn and the visionary British cybernetician Stafford Beer, Santiago Dreaming. It turns out that Beer gave the 1973 Massey Lectures, which Anansi has reprinted as Designing Freedom. He also moved part-time to Toronto in the 1980s, where his last partner, Dr. Allenna Leonard of Metaphorum, still resides. He died in 2002.

Another interesting thread is Fernando Flores, who was the political lead of Cybersyn and the person who recruited Beer for the project. After the coup, Flores went to the US and got a Ph.D., collaborating with Terry Winograd and drawing on the ideas of Humberto Maturana, also Chilean. That’s right – the Flores of Understanding Computers and Cognition. He is now back in Chile as a senator and supports various projects there.

The common thread is that Beer, Flores and Maturana all seem interested in viable systems in different spheres. They were applying cybernetics.

Korea: Part-time Lecturers and Suicide

The Global Voices Online site has a story that matters: Part-time Lecturers and Suicide. A number of humanities lecturers in Korea have committed suicide after spending years in part-time sessional work with no promise of a professorship. Would we know if we had a similar situation here in Canada? Increasingly we depend on sessional teaching to cover courses as we handle budget cuts by not hiring tenure-track or even contract faculty. My guess is that some departments may be approaching 50% of their teaching being done by part-timers. Why is this? Sessionals, hired one course at a time, are a cheap way to get quality teaching, especially if they are led to believe they might eventually get the coveted positions. Full-time faculty benefit because we keep our research positions while hiring teaching help for a fraction of our salary. At what point should we be honest with ourselves, admit that a university cannot afford tenure-track faculty for all its teaching, and deal with the effects by creating teaching positions with some stability instead of stringing along recent graduates? Is Korea ahead of us in confronting the desperation of part-time faculty? Will it take a suicide for anyone to notice here?

Harvard and Open Access

Peter Suber in Open Access News has reproduced the text of the motion that the Faculty of Arts and Sciences at Harvard passed requiring faculty to deposit a copy of their articles with the university.

The Faculty of Arts and Sciences of Harvard University is committed to disseminating the fruits of its research and scholarship as widely as possible. In keeping with that commitment, the Faculty adopts the following policy: Each Faculty member grants to the President and Fellows of Harvard College permission to make available his or her scholarly articles and to exercise the copyright in those articles.

According to another post by Peter Suber, Harvard is the first North American university to adopt such an open access policy. He calls it a “permission mandate” (faculty grant the university permission to make their research open) rather than a “deposit mandate.” It has the virtue that the university, not the individual faculty member, takes responsibility for maintaining access.

More on this can be found here (another Suber post) and here (Chronicle of Higher Ed.).

Zielinski: Deep Time of the Media

Image of Cover

Siegfried Zielinski’s Deep Time of the Media (translated by Gloria Custance, Cambridge, MA: MIT Press, c2006) is an unusual book that pokes into the lost histories of media technologies in order to start “toward an archaeology of hearing and seeing by technical means” (as the subtitle goes). Zielinski starts by discussing the usual linear history of media technologies, which recovers only those precursors that predict what we now believe is important. This is the Vannevar Bush, Ted Nelson type of history. Zielinski looks away from the well-known precursors towards the magical and tries to recover those moments of diversity in technologies. (He draws on Gould’s idea of punctuated equilibrium as a model for media technologies, i.e. that we have bursts of diversity and then periods of conformity.)

I’m interested in his idea of the magical because I think it is important to the culture of computing. The magical for Zielinski is not a primitive precursor of science or efficiency; it is an attitude towards possibility that finds spectacle in technology. Zielinski offers a series of conclusions that sketch out how to preserve the magical:

Developed media worlds need artistic, scientific, technical, and magical challenges.  (p. 255)

Cultivating dramaturgies of difference is an effective remedy against the increasing ergonomization of the technical media worlds that is taking place under the banner of ostensible linear progress. (p. 259)

Establishing effective connections with the peripheries, without attempting to integrate these into the centers, can help to maintain the worlds of the media in a state that is open and transformable. (p. 261)

The most important precondition for guaranteeing the continued existence of relatively power-free spaces in media worlds is to refrain from all claims to occupying the center. (p. 269)

The problem with imagining media worlds that intervene, of analyzing and developing them creatively, is not so much finding an appropriate framework but rather allowing them to develop with and within time. (p. 270)

Kairos poetry in media worlds is potentially an efficacious tool against expropriation of the moment. (p. 272)

Artistic praxis in media worlds is a matter of extravagant expenditure. Its privileged locations are not palaces but open laboratories. (p. 276)

Hitwise: Web Intelligence

On jill/txt I came across an interesting graph about OpenSocial vs. Facebook showing the difference in market share. Hitwise provides statistics and analysis of internet usage. They get their data from ISPs, which sounds like it could be a privacy issue. See their Product Features for the services they provide that most of us can’t afford. See what they say about how they gather information in How We Do It, or here is a quote from their press release on Hannah Montana Most Searched for Halloween Costume:

Since 1997, Hitwise has pioneered a unique, network-based approach to Internet measurement. Through relationships with ISPs around the world, Hitwise’s patented methodology anonymously captures the online usage, search and conversion behavior of 25 million Internet users. This unprecedented volume of Internet usage data is seamlessly integrated into an easy to use, web-based service, designed to help marketers better plan, implement and report on a range of online marketing programs.

They have blogs by their analysts, most of whom seem to be in the UK, that have interesting notes about trends like iTunes overtakes Free Music Downloads in Internet Searches.
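
To give a sense of what lies behind a market-share graph like the OpenSocial vs. Facebook one, here is a toy version of a share-of-visits calculation over anonymized clickstream records. This is only a guess at the general shape of such a computation, with invented data – not Hitwise’s patented methodology.

```python
# Toy market-share calculation: what fraction of visits to a set of
# competing sites went to each one? The clickstream data is invented.
from collections import Counter

# Each record: (anonymous_user_id, site_visited)
clickstream = [
    (1, "facebook.com"), (2, "facebook.com"), (3, "myspace.com"),
    (1, "facebook.com"), (4, "orkut.com"), (5, "facebook.com"),
]

visits = Counter(site for _, site in clickstream)
total = sum(visits.values())
for site, count in visits.most_common():
    print(f"{site}: {count / total:.0%} of visits")
```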

As It Happens, Privacy, and the Mechanical Turk

As It Happens on CBC Radio just played a good double segment on “Google Eyes”. The first part looked at the Amazon Mechanical Turk task looking for Steve Fossett’s plane on satellite images. The second part looked at privacy issues around street level imaging from outfits like Google.

Mechanical Turk (Artificial Artificial Intelligence) is a project where people can contribute to tasks that need many human eyes, like looking through thousands of satellite images for a missing plane. It reminds me of the SETI@home project, which lets users install a screen saver that uses their unused processing cycles for SETI signal processing. SETI@home is now part of a generalized project, BOINC, which, like the Mechanical Turk, has a process for people to post tasks for others to work on.
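
For the curious, here is roughly what posting a task (a “HIT”) to the Mechanical Turk looks like programmatically. The sketch uses Amazon’s current boto3 SDK, which postdates this post, and points at the requester sandbox; the task URL, reward and timings are all hypothetical.

```python
# Post a hypothetical image-scanning task to the Mechanical Turk sandbox.
import boto3

client = boto3.client(
    "mturk",
    region_name="us-east-1",
    # The sandbox endpoint, so experimenting costs no real money.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion frames a web page where the worker does the task;
# the URL here is made up for the example.
question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/fossett-search?tile=42</ExternalURL>
  <FrameHeight>450</FrameHeight>
</ExternalQuestion>"""

hit = client.create_hit(
    Title="Scan a satellite image tile for signs of a small plane",
    Description="Look at one image and flag anything that could be wreckage.",
    Keywords="image, search, satellite",
    Reward="0.02",                    # US dollars, passed as a string
    MaxAssignments=3,                 # three independent pairs of eyes per tile
    LifetimeInSeconds=7 * 24 * 3600,  # keep the task available for a week
    AssignmentDurationInSeconds=300,  # five minutes per image
    Question=question,
)
print("Posted HIT", hit["HIT"]["HITId"])
```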

The Privacy Commissioner of Canada announced yesterday that she has written both Google and Immersive Media (who developed the Street View technology used by Google) “to seek further information and assurances that Canadians’ privacy rights will be safeguarded if their technology is deployed in Canada.” The issue is that,

While satellite photos, online maps and street level photography have found useful commercial and consumer applications, it remains important that individual privacy rights are considered and respected during the development and implementation of these new technologies.

This is a growing concern among privacy advocates as a number of companies have considered integrating street level photography in their online mapping technologies.

In street level photography the images are, in some cases, being captured using high-resolution video cameras affixed to vehicles as they proceed along city streets.

Google, according to the Commissioner on the radio, has not replied to the August 9th letter.

Where is the Semantic Web?

Semantic Web Diagram

Where is the Semantic Web? In the face of Web 2.0 hype, the semantic web meme seems to be struggling. Tim Berners-Lee, in the slides from a 2003 talk, says there is “no such thing” as a killer app for the semantic web – “its the integration, stupid!” (slide 7 of 35). The problem is that mashups are giving users usable integration now. The difference is that mashups are usually built around one large content portal, like Flickr, which little satellite tools then feed off. The semantic web was a much more democratic idea of integration.
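
To make the contrast concrete, here is a minimal sketch of semantic-web-style integration: two independent publishers describe the same resource with shared URIs and vocabularies, so a generic client can merge their data without any site-specific mashup code. It uses the rdflib Python library, and the URIs and data are invented.

```python
# Two independent RDF sources describing the same resource merge by
# simple graph union, because they share URIs and vocabularies.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")
person = URIRef("http://example.org/people/ada")

library = Graph()  # one publisher's data
library.add((person, FOAF.name, Literal("Ada Lovelace")))

events = Graph()   # a second, independent publisher's data
events.add((person, EX.speaksAt, URIRef("http://example.org/events/webconf")))

merged = library + events  # integration is just set union of triples
for subject, predicate, obj in merged:
    print(subject, predicate, obj)
```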

Google’s Peter Norvig is quoted in Google exec challenges Berners-Lee saying that there are three problems with the semantic web:

  • Incompetence: users don’t know how to use HTML in a standard way, let alone RDF.
  • Competition: companies in a leadership position don’t like to use open standards that could benefit others; they like to control the standards to their advantage.
  • Trust: too many people try to trick systems to change the visibility of their pages (selling Viagra).

In a 2006 Guardian report, Spread the word, and join it up, SA Mathieson quotes Berners-Lee to the effect that they (the semantic web folk) haven’t shown useful stuff. The web of TBL was a case of less is more (compared to SGML and other hypertext systems); the semantic web may lose out to all the creative mashups that are less standardized and more useful.

IDC White Paper: The Digital Universe

Image of Report Cover

In an earlier blog entry I mentioned the IDC report, The Digital Universe, about the explosion of digital information. It was commissioned by EMC Corporation and is available free on their site, here. They also have a page of related information which includes links to “Are You an Informationist?” and “The Inforati Files”.

The PDF of the IDC White Paper includes some interesting points:

  • Between 2006 and 2010, the information added annually to the digital universe will increase more than sixfold, from 161 exabytes to 988 exabytes (a quick check of these figures follows the list).
  • Three major analog-to-digital conversions are powering this growth – film to digital image capture, analog to digital voice, and analog to digital TV.
  • Images, captured by more than 1 billion devices in the world, from digital cameras and camera phones to medical scanners and security cameras, comprise the largest component of the digital universe. They are replicated over the Internet, on private organizational networks, by PCs and servers, in data centers, in digital TV broadcasts, and on digital projection movie screens.
  • Meanwhile, building automation and security migrate to IP networks, surveillance goes digital, and RFID and sensor networks proliferate.
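
The sixfold figure in the first point is easy to sanity check, and it implies a steep compound annual growth rate:

```python
# Back-of-the-envelope check on the IDC figures (annual additions, in
# exabytes, from 2006 to 2010).
start, end, years = 161, 988, 4
factor = end / start                # overall growth factor
annual = factor ** (1 / years) - 1  # implied compound annual growth rate
print(f"{factor:.1f}x overall, about {annual:.0%} per year")
# prints: 6.1x overall, about 57% per year
```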

Is it time to rewrite “The Work of Art in the Age of Mechanical Reproduction” to think about “The Image in the Age of Networked Distribution”?