Webs and whirligigs: Marshall McLuhan in his time and ours

McLuhan and Woody Allen from Annie Hall

Today is the 100th anniversary of Marshall McLuhan’s birth, so there are a bunch of articles about his work, including this one from the Nieman Journalism Lab by Megan Garber, Webs and whirligigs: Marshall McLuhan in his time and ours. I also found an article by Paul Miller, aka DJ Spooky, Dead Simple: Marshall McLuhan and the Art of the Record, which is partly about the Medium is the Massage record that McLuhan worked on with others. Right at the top you can listen to a DJ Spooky remix of McLuhan from the record.

Some students here at the U of A and I have been working our way through the archives of the Globe and Mail, studying how computing was presented to Canadians, starting with the first articles in the 1950s. McLuhan features in a number of articles, as he was eminently quotable and was getting research funding. The best article is from May 7, 1964 (page 7), by Hugh Munro, titled “Research Project with Awesome Implications.” Here are some quotes:

If successful, they said, it (the project) could produce a foolproof system for analyzing humans and manipulating their behavior, or it could give mankind a surefire method of planning the future and making a world free from large-scale social mistakes. …

They (the team of nine scientists) have undertaken to discover the impacts of culture and technology on each other, or, as Dr McLuhan put it, to discover “how the things we make change the way we live and how the way we live changes the things we make.” …

The next stage in the technological revolution that will change man’s perceptions is the computer. But it may hold the secret to the communications problem. With these electronic devices, it is possible to test all manner of things from ads to cities.

The article describes a grant (probably Canada Council, but perhaps a foundation grant) awarded to an interdisciplinary team of nine “scientists” drawn from medicine, architecture, engineering, political science, psychiatry, museology, anthropology and English. They were going to use computers and head cameras (which track what people look at) to understand what people sense, how they are stimulated, and how what they sense is conditioned by their background. “The scientists at the Centre (of Culture and Technology at U of T) believe they can define and catalogue the sensory characteristics …”

The idea is that if they can figure out how people are stimulated, then they can figure out how to manipulate them, for good or ill. “Foolproof ads could be designed. ‘Madison Avenue could rule the world,’ Dr. McLuhan said. ‘The IQs of illiterate people could be raised dramatically by new educational methods.’”

Oh to be so confident about research outcomes!

Inside Facebook: Available Data Shows Facebook User Numbers Growing Quickly, or Slowly, or Falling

According to Inside Facebook, available data shows Facebook user numbers possibly flattening in early-adopter countries like Canada, the UK and the US. This article follows an earlier one, Facebook Sees Big Traffic Drops in US and Canada as It Nears 700 Million Users Worldwide, that got a fair amount of press attention. What is going on? In the article about available data they say,

there do appear to be some overriding trends here. Canada, the United Kingdom and a few other early adopting countries have alternately shown gains and losses starting in 2010. Up until then, growth had generally been much steadier.

I doubt this means that Facebook is about to disappear. It is still growing worldwide. They may just be hitting a saturation point – something you would expect. We might ask if or how Facebook will change once its user base is no longer expanding. Is Facebook dependent on a perception of growth, and will it suffer once it is no longer the hot growing thing? Will users migrate their social networking to the next big thing?
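The saturation I am describing is just the familiar S-shaped adoption curve: growth looks explosive until the market of potential users starts to run out. Here is a toy sketch of the dynamic (the capacity, midpoint and rate are made up for illustration and have nothing to do with Facebook’s actual numbers):

```python
import math

def logistic(t, capacity, midpoint, rate):
    """S-shaped adoption curve: fast early growth, then saturation."""
    return capacity / (1 + math.exp(-rate * (t - midpoint)))

# Hypothetical market of 30 million potential users, adoption midpoint at year 4
users = [logistic(t, 30e6, 4, 1.2) for t in range(9)]
growth = [users[i + 1] - users[i] for i in range(len(users) - 1)]
# Year-over-year growth peaks near the midpoint and then falls toward zero,
# even though total users keep rising toward the 30 million ceiling
```

The point of the sketch is that flat or falling growth numbers are exactly what a healthy but maturing service would show.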

I would add a general reflection: there are now more social media sites than I can keep up with. There isn’t enough time to blog, Facebook, Twitter, Flickr and so on. We now have to choose the social media that suit our changing lives and where our friends are. My academic friends have migrated to Twitter (while I’m still stuck blogging). Facebook is what my mother likes. The trick is not to feel one has to keep up with it all.

Learning to Love the (Shallow, Divisive, Unreliable) New Media

James Fallows has written a good article in the latest issue of The Atlantic, Learning to Love the (Shallow, Divisive, Unreliable) New Media. He sets up the standard argument that the old way of consuming news from a small number of serious outlets brought us together and ensured that there was serious news. Now that everyone can choose their own news, things have changed. Fallows talks with the editor of Gawker about what works on the Internet: Gawker gives its audience what they want, not what serious people think they should want. Fallows ends by making his peace with how things have changed, hoping that new forms will evolve.

But perhaps this apparently late stage is actually an early stage, in the collective drive and willingness to devise new means of explaining the world and in the individual ability to investigate, weigh, and interpret the ever richer supply of information available to us. Recall the uprisings in Iran and Egypt. Recall the response to the tsunami in Indonesia and the earthquake in Haiti. My understanding of technological and political history makes me think it is still early. Also, there is no point in thinking anything else.

Beware Social Media’s Surprising Dark Side, Scholars Warn CEO’s

Jeffrey R. Young has an article in the Technology section of The Chronicle of Higher Education enjoining us to Beware Social Media’s Surprising Dark Side, Scholars Warn CEO’s (March 20, 2011). The article is about a South by Southwest Interactive conference that brought together researchers and industry.

One of the big trends is using crowdsourcing or micropayments to get work done for free or for very little. Jonathan Zittrain, a Harvard law professor, warned that this could be exploitative.

Mr. Zittrain began his argument against crowdsourcing with the story of the Mechanical Turk, a machine in the 18th century that was said to play chess as well as a human. But the contraption was a showy fraud; a man hidden inside moved the arms of a turban-wearing mannequin. Amazon, the online shopping giant, now offers a crowdsourcing service it calls Mechanical Turk, which lets anyone, for a fee, commission unseen hands to work on tasks like proofreading documents or identifying artists in musical recordings.

The similarity of crowdsourcing to a man shoved inside a box means the practice isn’t exactly worker-friendly, the professor argued. “In fact, it’s an actual digital sweatshop,” he said of the many sites that use the approach.

Fees paid for crowdsourced tasks are usually so meager that they could not possibly earn participants a living wage, Mr. Zittrain argued. He is familiar with one group drawn to the services: poor graduate students seeking spending money.
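Zittrain’s point about meager fees is easy to check with back-of-the-envelope arithmetic. The fee and task time below are hypothetical, just to show the shape of the calculation:

```python
# Hypothetical microtask: $0.05 per task, 90 seconds of work each
fee_per_task = 0.05          # dollars
seconds_per_task = 90

tasks_per_hour = 3600 / seconds_per_task   # 40 tasks an hour
hourly_rate = fee_per_task * tasks_per_hour
print(f"effective wage: ${hourly_rate:.2f}/hour")  # $2.00/hour
```

At those (invented but not implausible) numbers the effective wage is well below any minimum wage, which is the heart of the “digital sweatshop” charge.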

I wonder whether anyone has proposed a code of ethics for crowdsourcing. Thanks to Megan for sending this to me.

Lancashire: Literary Alzheimer’s

In the category of things I meant to blog some time ago is Ian Lancashire and Graeme Hirst’s research into Agatha Christie’s Alzheimer’s-related dementia, which was written up by the New York Times in their list of notable ideas for 2009. The write-up is by Amanda Fortini; see Literary Alzheimer’s – The Ninth Annual Year in Ideas – Magazine. There is a longer article about this research by Judy Stoffman in the Insight section of the Toronto Star, An Agatha Christie mystery: Is Alzheimer’s on the page? (Jan. 23, 2010).

Lancashire’s specialty is the esoteric field of neuro-cognitive literary theory – in his words “what science says about the creative process versus what authors report about how they create their books.” He started to apply computer analysis to literary texts in 1982.

Ian Lancashire has links on his home page to the poster that first got attention and to a paper. He has also just published a book, Forgetful Muses: Reading the Author in the Text, which develops his neuro-cognitive literary theory.
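Studies of this kind compare measures of vocabulary richness across an author’s novels. A minimal sketch of one such measure, the type-token ratio, follows; the two sample strings are placeholders I made up, not Christie’s text, and the real study’s methods are more sophisticated than this:

```python
import re

def type_token_ratio(text, window=50):
    """Share of distinct words (types) among all words (tokens).
    Computed over a fixed-size window of tokens so that texts of
    different lengths can be compared fairly."""
    tokens = re.findall(r"[a-z']+", text.lower())[:window]
    return len(set(tokens)) / len(tokens)

early = "the varied vocabulary of an early novel ranges widely over many distinct words"
late = "the same words repeat and repeat the same words repeat and repeat"
# A repetitive text has a markedly lower ratio than a varied one
```

A falling type-token ratio across successive books is one of the signals such research looks for as possible evidence of cognitive decline.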

NYT: Armies of Expensive Lawyers, Replaced by Cheaper Software

The New York Times has an article about commercial text analysis systems by John Markoff, Armies of Expensive Lawyers, Replaced by Cheaper Software (March 5, 2011, A1 in the New York edition; March 4 online). He describes how companies are building systems that can analyze the immense numbers of documents shared in lawsuits. Traditionally an army of people would comb through the documents; “Now, thanks to advances in artificial intelligence, ‘e-discovery’ software can analyze documents in a fraction of the time for a fraction of the cost.”

Some programs go beyond just finding documents with relevant terms at computer speeds. They can extract relevant concepts — like documents relevant to social protest in the Middle East — even in the absence of specific terms, and deduce patterns of behavior that would have eluded lawyers examining millions of documents.

There is a nice graphic accompanying the article here. Markoff mentions companies like Blackstone Discovery and Cataphora. He also argues that the availability of a large email archive from Enron has made it possible for teams to experiment on a real dataset.
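The concept search Markoff describes goes beyond keyword matching: a document can be relevant even when the query term never appears in it. A toy sketch of the idea, using a tiny hand-built lexicon of related terms (the lexicon and the documents are invented for illustration; commercial systems learn such associations at scale rather than listing them by hand):

```python
# Toy concept search: score documents against a concept by counting
# related terms, so a document can match without the concept word itself.
CONCEPT_LEXICON = {
    "protest": {"demonstration", "rally", "march", "strike", "unrest"},
}

def concept_score(doc, concept):
    """Count how many concept-related terms appear in the document."""
    words = set(doc.lower().split())
    related = CONCEPT_LEXICON[concept] | {concept}
    return len(words & related)

docs = [
    "quarterly earnings were flat",
    "workers planned a strike and a march after the rally",
]
ranked = sorted(docs, key=lambda d: concept_score(d, "protest"), reverse=True)
# The second document ranks first even though "protest" never appears in it
```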

Addicted to Games?

Today I came across stories about game addiction. One is from the BBC, which aired an episode of Panorama titled Addicted to Games?. The web page has video clips and articles like Can video gaming cross from innocent fun to addiction? by Raphael Rowe. (BTW the web page will expire in 11 months – I guess the BBC pulls pages after a year.) Edge has an article reviewing the episode, Was Panorama’s Game Addiction Report Fair?, where they conclude:

Videogames are a powerful form of entertainment. Last night’s Panorama report acknowledged this, and – despite an anxiously concerned tone throughout – also acknowledged that the vast majority of gamers have nothing to fear from their hobby. But beyond a superficial look at basic game mechanics, the report made little attempt to find out why, for the unlucky and unfortunate young men it interviewed, gaming had become such an all-encompassing force in their lives.

From the Los Angeles Times there is an article, Video game addiction: Researchers identify risk factors, which reports on a study just released in Pediatrics (Jan. 17 issue) saying,

“Greater amounts of gaming, lower social competence, and greater impulsivity seemed to act as risk factors for becoming pathological gamers, whereas depression, anxiety, social phobias, and lower school performance seemed to act as outcomes of pathological gaming.” (quote from original study in LAT article)

Edge again has an article, ESA Refutes Pediatrics Videogame Studies, commenting briefly on the research.

The CBC has an article by Amina Zafar, Video game addiction: Does it exist?, which is longer, thoughtful, and has lots of useful links down the side. It is part of a “special video games feature package”, Pushing Buttons.

NY Times: Humanities Scholars Embrace Digital Technology

The next big idea is data, according to a New York Times article, Humanities Scholars Embrace Digital Technology, by Patricia Cohen (November 16, 2010). The article reports on some of the big data interpretation projects funded by the Digging Into Data program, including the Mining with Criminal Intent project I am on.

Members of a new generation of digitally savvy humanists argue it is time to stop looking for inspiration in the next political or philosophical “ism” and start exploring how technology is changing our understanding of the liberal arts. This latest frontier is about method, they say, using powerful technologies and vast stores of digitized materials that previous humanities scholars did not have.

I’m not sure this is a new generation, as we have been at this for a while, but perhaps the point is that the new generation is now looking away from theory and towards large-scale data issues.

What stands out about the projects mentioned and others is that the digital humanities and design fields are developing new and subtler forms of large-scale data mining and interpretation that use methods from other disciplines along with a sensitivity to the nature of the data and the questions we want to ask. The image above comes from Stanford’s Visualization of Republic of Letters project. There is nothing new about visualization or network analysis, but digital humanists are trying to adapt these methods to messy human data – in other words, to interpret the really interesting stuff so that it makes sense to someone.

Perhaps we may be able to show that the following theses are true and important to the broader community:

  • Interesting data has to be interpreted to be interesting. Someone has to pose the questions that make data useful.
  • There is too much data and it is messy; therefore it can’t be interpreted automatically. Real-world analysis always involves questions, choices, data curation, mixed techniques, and iterative interpretation of results to generate knowledge.
  • Interesting data always has to be explained to someone in some context. Results are only useful knowledge if they are published in some fashion that makes them accessible to an intended audience.
  • Humanists have long been the curators and interpreters of information, which is why the subtle skills of questioning, curating, editing, analyzing, interpreting and representing are all the more needed now. Without humanists (and I include librarians and archivists in this category) who are comfortable with digital data and methods, we will have only too much data and too many unused tools.
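The theses above are easy to see in even the simplest text analysis: raw counts say little until a human makes curatorial choices. A quick sketch (the toy text and the stopword list are my own; any corpus would show the same pattern):

```python
from collections import Counter
import re

text = ("the whale and the sea and the ship sailed on "
        "the captain watched the whale from the ship")
tokens = re.findall(r"[a-z]+", text.lower())

raw = Counter(tokens).most_common(3)
# Uncurated, the list is topped by "the" – a function word, not content

STOPWORDS = {"the", "and", "on", "from", "a", "of"}
curated = Counter(t for t in tokens if t not in STOPWORDS).most_common(3)
# Only after a human choice (the stopword list) do content words surface
```

Even the stopword list is an interpretive decision, which is exactly the kind of choice the theses say cannot be automated away.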

Thanks to Judith for pointing me to this NYT article.