Archive for the ‘Internet Culture and Technology’ Category

Last week I was interviewed by Judy Aldous on the CBC programme alberta@noon on Monday, June 10, 2013. We took calls about social media. I was intrigued by the range of reactions, from “I don’t need anything other than messaging” to “I use it all the time for my company.” One point I was trying to make is that we now all have to manage our social media presence. There are too many venues to be present in all of them and, as my colleague Julie Rak points out, we are now all celebrities in the sense that we have to worry about how we appear in media. That means we need to educate ourselves to some degree and experiment with developing a voice.
Tomorrow we are holding an Around the World Symposium on Digital Culture. This symposium brings together scholars from different countries to talk about digital culture for some 17 to 20 hours as it moves from place to place, streaming their talks and discussions. The Symposium is being organized by the Kule Institute for Advanced Study here at the University of Alberta. Visit the site to see the speakers and to tune in.
Please join in using the Twitter hashtag #UofAworld
Sam sent me a great and careful article about MOOCs, The MOOC Moment and the End of Reform. The article is a longer version of a paper given by Aaron Bady at UC Irvine as part of a panel on MOOCs and For Profit Universities. In his longer paper Bady makes a number of points:
- We need to look closely at the rhetoric that is spinning this a “moment” of something new. Bady questions the sense of time and timing to the hype. What is really new? Why is this the moment?
- There isn’t much new to MOOCs except that prestige universities are finally trying online education (which others have been trying since the 1980s) and branding their projects. MOOCs represent Harvard trying to catch up with the University of Phoenix by pretending they have leapfrogged decades of innovation.
- The term MOOC was coined in the context of an online course at the U of Manitoba. See the Wikipedia article on MOOCs. The Manitoba experiment, however, was quite different. “[T]he goal of these original MOOCs was to foster an educational process that was something totally different: it would be as exploratory and creative as its participants chose to make it, it was about building a sense of community investment in a particular project, a fundamentally socially-driven enterprise, and its outcomes were to be fluid and open-ended.”
- MOOCs are a speculative bubble that will burst. The question is what things will look like when it does.
- MOOCs are not necessarily open as many are being put on by for-profit companies. Perhaps they could be called MOCks.
- The economics of MOOCs need to be watched. They look a lot like other dot com businesses.
- MOOCs are the end of the change that happens when learning is in dialogue, not the beginning of it. MOOCs could freeze innovation because they take so many resources for so few to develop.
Here is a quote:
If I have one overarching takeaway point in this talk, it’s this: there’s almost nothing new about the kind of online education that the word MOOC now describes. It’s been given a great deal of hype and publicity, but that aura of “innovation” poorly describes a technology—or set of technological practices, to be more precise—that is not that distinct from the longer story of online education, and which is designed to reinforce and re-establish the status quo, to make tenable a structure that is falling apart.
The New Yorker last month had a great story by Larissa MacFarquhar on The Tragedy of Aaron Swartz. The net is full of opinions and outrage about the Swartz affair; MacFarquhar gives us a human dimension and a complex web of quotes from others. Another New Yorker story by Tim Wu, Fixing the Worst Law in Technology, explains the law that prosecutors used against Swartz:
The Computer Fraud and Abuse Act is the most outrageous criminal law you’ve never heard of. It bans “unauthorized access” of computers, but no one really knows what those words mean.
I must admit, my first thought on reading about this case was that I would love to have all of JSTOR, though I’m not sure what I would do with it. I think there is a closet collector in every academic who wants a copy of everything they might need to consult late at night.
Robert Darnton has written an essay about the launch of the Digital Public Library of America that everyone should read. A great writer and a historian, he provides both historical and contemporary context. He quotes from the original mission statement to show the ambition:
“an open, distributed network of comprehensive online resources that would draw on the nation’s living heritage from libraries, universities, archives, and museums in order to educate, inform, and empower everyone in the current and future generations.”
The essay, The National Digital Public Library Is Launched! by Robert Darnton is in the New York Review of Books. A lot of it talks about what Harvard is contributing (Darnton is the University Librarian there), which is OK as it is good to see leadership.
He also mentions that Daniel Cohen is the new executive director. Bravo! Great choice!
Spiegel Online has an interesting story about how a Hacker Measures the Internet Illegally with Carna Botnet. The anonymous hacker(s) exploited unprotected devices online to create a botnet, which they then used to take a census of those online.
So what were the actual results of the Internet census? How many IP addresses were there in 2012? “That depends on how you count,” the hacker writes. Some 450 million were “in use and reachable” during his scans. Then there were the firewalled IPs and those with reverse DNS records (which means there are domain names associated with them). In total, this equalled some 1.3 billion IP addresses in use.
A week or so ago I began to follow Mark Sample’s tweets carefully as he tweeted what at first sounded like a nightmare at Dulles when he went to catch a flight. As he tweeted through the days it became more surreal. It seemed he was sequestered and being interrogated. He reported shots and deaths. It was hard to tell what was going on and then it was all over with a link to a YouTube video of him whispering into the phone. Then when you clicked on @samplereality you got “Internal Server Error” and if you tried to find his page you got a “Sorry, that page doesn’t exist!”
Someone had deleted his account.
Fortunately there was an off-site archive of his tweets that he had backed up here. And, as a useful hint, there was an entry on ProfHacker by Sample on how to Keep Your Official Twitter Archive Fresh which the editors introduced with,
this is a draft that Mark Sample uploaded to Profhacker last week. We have been unable to contact Mark for the final revisions, so we are posting it as-is. Our apologies for any errors.
It seemed more and more likely that the dramatic events in Dulles were a work of net fiction or an alternate reality game, something Mark is interested in and claimed to be working on for 2013 in his blog entry From Fish to Print: My 2012 in Review. I was also suspicious that none of his colleagues at George Mason seemed to be that worked up about his experience. And that’s the fun of this sort of alternate reality fiction. You don’t really know if you’re being taken or not and so you start poking around. Like many I was initially sympathetic (who hasn’t been inconvenienced by delays?) and then worried. Eventually I decided the stream of posts was a work of fiction, but of course I’m still not sure. When alternate reality fiction is done well you never know whether This Is Not A Game.
I still don’t, but I’ll risk a guess and congratulate Mark … Bravo! If I’m wrong I apologize.
I am seeing more and more articles in the media about text analysis and the digital humanities. Ryan Cordell used the platform of the amazing story of his children getting millions of Facebook likes (to get a puppy) to discuss the digital humanities and his research on how ideas could go viral before the internet. (See the CBC Q podcast of his interview.)
From Humanist I found a New York Times article by Steve Lohr on Literary History, Seen Through Big Data’s Lens. The story talks about Matt Jockers’ forthcoming work on Macroanalysis: Digital Methods and Literary History (University of Illinois Press). Matt is quoted saying,
Traditionally, literary history was done by studying a relative handful of texts, … What this technology does is let you see the big picture — the context in which a writer worked — on a scale we’ve never seen before.
In today’s Edmonton Journal I came across a story by Misty Harris on If Romeo and Juliet had cellphones: Study views the mobile revolution through a Shakespearean lens. This story reports on a paper by Barry Wellman that uses Romeo and Juliet as a way to think about how mobile media (text messaging especially) have changed how we interact. In Shakespeare’s time you interacted with others through groups (like your family in Verona). Now individuals can have distributed networks of individual friends that don’t have to go through any gatekeepers.
The Guardian has a story by John Burn-Murdoch on how Study: less than 1% of the world’s data is analysed, over 80% is unprotected.
This Guardian article reports on a Digital Universe Study which finds that the “global data supply reached 2.8 zettabytes (ZB) in 2012” and that “just 0.5% of this is used for analysis”. The industry study emphasizes that the promise of “Big Data” is in its analysis:
First, while the portion of the digital universe holding potential analytic value is growing, only a tiny fraction of territory has been explored. IDC estimates that by 2020, as much as 33% of the digital universe will contain information that might be valuable if analyzed, compared with 25% today. This untapped value could be found in patterns in social media usage, correlations in scientific data from discrete studies, medical information intersected with sociological data, faces in security footage, and so on. However, even with a generous estimate, the amount of information in the digital universe that is “tagged” accounts for only about 3% of the digital universe in 2012, and that which is analyzed is half a percent of the digital universe. Herein is the promise of “Big Data” technology — the extraction of value from the large untapped pools of data in the digital universe. (p. 3)
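The percentages in the quote are easier to grasp as absolute volumes. A quick back-of-the-envelope sketch (the 2.8 ZB total and the 3% tagged and 0.5% analyzed figures come from the study; the script itself is just illustrative arithmetic):

```python
# Back-of-the-envelope figures from the IDC Digital Universe Study (2012).
ZB = 10**21  # one zettabyte, in bytes

digital_universe = 2.8 * ZB          # total data produced in 2012
tagged = 0.03 * digital_universe     # ~3% carries metadata ("tagged")
analyzed = 0.005 * digital_universe  # ~0.5% is actually analyzed

print(f"Tagged:   {tagged / ZB:.3f} ZB")    # 0.084 ZB
print(f"Analyzed: {analyzed / ZB:.3f} ZB")  # 0.014 ZB
```

Even the half a percent that is analyzed is an enormous amount of data in absolute terms, which is worth keeping in mind when reading the “tiny fraction” rhetoric.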
I can’t help wondering if industry studies aren’t trying to stampede us into thinking that there is lots of money to be made in analytics. These studies often seem to come from the entities that benefit from investment in analytics. What if the value of Big Data turns out to be in getting people to buy into analytical tools and services (or be left behind)? Has there been any critical analysis (as opposed to anecdotal evidence) of whether analytics really do warrant the effort? A good article I came across on the need for analytical criticism is Trevor Butterworth’s Goodbye Anecdotes! The Age of Big Data Demands Real Criticism. He starts with:
Every day, we produce 2.5 exabytes of information, the analysis of which will, supposedly, make us healthier, wiser, and above all, wealthier—although it’s all a bit fuzzy as to what, exactly, we’re supposed to do with 2.5 exabytes of data—or how we’re supposed to do whatever it is that we’re supposed to do with it, given that Big Data requires a lot more than a shiny MacBook Pro to run any kind of analysis.
Of course the Digital Universe Study is not only about the opportunities for analytics. It also points out:
- That data security is going to become more and more of a problem
- That more and more data is coming from emerging markets
- That we could get a lot more useful analysis done if there was more metadata (tagging), especially at the source. They are calling for more intelligence in the gathering devices – the surveillance cameras, for example. They could add metadata at the point of capture like time, place, and then stuff like whether there are faces.
- That the promising types of data that could generate value start with surveillance and medical data.
Reading about Big Data I also begin to wonder what it is. Fortunately IDC (who are behind the Digital Universe Study) have a definition:
Last year, Big Data became a big topic across nearly every area of IT. IDC defines Big Data technologies as a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data by enabling high-velocity capture, discovery, and/or analysis. There are three main characteristics of Big Data: the data itself, the analytics of the data, and the presentation of the results of the analytics. Then there are the products and services that can be wrapped around one or all of these Big Data elements. (p. 9)
Big Data is not really about data at all. It is about technologies and services. It is about the opportunity that comes with “a big topic across nearly every area of IT.” Big Data is more like Big Buzz. Now we know what follows Web 2.0 (and it was never going to be Web 3.0.)
For a more academic and interesting perspective on Big Data I recommend (following Butterworth) Martin Hilbert’s “How much information is there in the ‘information society’?” (Significance, 9:4, 8-12, 2012.) One of the more interesting points he makes is the growing importance of text,
Despite the general perception that the digital age is synonymous with the proliferation of media-rich audio and videos, we find that text and still images capture a larger share of the world’s technological memories than they did before. In the early 1990s, video represented more than 80% of the world’s information stock (mainly stored in analogue VHS cassettes) and audio almost 15% (on audio cassettes and vinyl records). By 2007, the share of video in the world’s storage devices had decreased to 60% and the share of audio to merely 5%, while text increased from less than 1% to a staggering 20% (boosted by the vast amounts of alphanumerical content on internet servers, hard disks and databases). The multimedia age actually turns out to be an alphanumeric text age, which is good news if you want to make life easy for search engines. (p. 9)
One of the points that Hilbert makes that would support the importance of analytics is that our capacity to store data is catching up with the amount of data broadcast and communicated. In other words, we are getting closer to being able to store most of what is broadcast and communicated. Even more dramatic is the growth in computation. In short, available computation is growing faster than storage, and storage faster than transmission. With excess comes experimentation, and with excess computation and storage, why not experiment with what is communicated? We are, after all, all humanists who are interested primarily in ourselves. The opportunity to study ourselves in real time is too tempting to give up. There may be little commercial value in the Big Reflection, but that doesn’t mean it isn’t the Big Temptation. The Delphic oracle told us to Know Thyself and now we can in a new new way. Perhaps it would be more accurate to say that the value in Big Data is in our narcissism. The services that will do well are those that feed our Big Desire to know more and more (lately) about ourselves, both individually and collectively. Privacy will be trumped by the desire for analytic celebrity where you become your own spectacle.
This could be good news for the humanities. I’m tempted to announce that this will be the century of the BIG BIG HUMAN. With Big Reflection we will turn on ourselves and consume more and more about ourselves. The humanities could claim that we are the disciplines that reflect on the human and analytics are just another practice for doing so, but to do so we might have to look at what is written in us or start writing in DNA.
In 2007, the DNA in the 60 trillion cells of one single human body would have stored more information than all of our technological devices together. (Hilbert, p. 11)
I’ve just posted my MLA 2013 convention notes on philosophi.ca (my wiki). I participated in a workshop on getting started with DH organized by DHCommons, gave a paper on “thinking through theoretical things”, and participated in a panel on “Open Sesame” (interoperability for literary study).
The sessions seemed full, even the theory one which started at 7pm! (MLA folk are serious about theorizing.)
At the convention the MLA announced and promoted a new digital MLA Commons. I’ve been poking around and trying to figure out what it will become. They say it is “a developing network linking members of the Modern Language Association.” I’m not sure I need one more venue to link to people, but it could prove an important forum if promoted.