New ‘Digital Divide’ Seen in Wasting Time Online

From @nowviskie, a New York Times article on the New ‘Digital Divide’ Seen in Wasting Time Online.

As access to devices has spread, children in poorer families are spending considerably more time than children from more well-off families using their television and gadgets to watch shows and videos, play games and connect on social networking sites, studies show.

This fits in interesting ways with research I’ve come across in two other contexts. First, it fits with what Valerie Steeves talked about at the GRAND 2012 conference I went to. (See my conference notes.) She reported on her Young Canadians in an Online World research – she has been interviewing young Canadians, their parents and teachers over the years. Between 2000 and now there has been a shift in attitude towards the internet from believing it was good for learning to thinking of it as a minefield.

The other context is a cool book I’m reading on keitai or mobile phones in Japan. Personal, Portable, Pedestrian is a collection edited by Mizuko Ito, Daisuke Okabe and Misa Matsuda about the cell phone phenomenon in Japan. They point out in passing how there are significant national/cultural differences in how technologies are picked up and used.

In the case of the PC Internet, differences in adoption were most often couched in terms of a digital divide, of haves and have-nots in relation to a universally desirable technological resource. By contrast, mobile media are frequently characterized as having different attractions depending on local contexts and cultures. The discourse of the digital divide has been mobilized in relation to Japanese keitai Internet access (see chapter 1) and is implicit in the discourse suggesting that the United States needs to catch up to Japanese keitai cultures. (p. 6)

While we need to be aware of differences in access to technology, we also should be critical of the assumptions underlying the discourse of divides. Why do we assume that the Internet is good and mobiles less so? Why did the Japanese discourse switch from viewing keitai as promoting youth rudeness and isolation to arguing for Japanese technonationalist exceptionalism (we use mobiles more because there is something exceptional about Japanese culture/spirit)?

Which reminds me of a TechCrunch article on How The Future of Mobile Lies in the Developing World. Cell phones for us are one more gadget with which to access the Internet. In the developing world they are revolutionary in that they leapfrogged the problems of physical infrastructure (phone wires) and now provide connectivity for many who had none. It is no wonder that the growth in the cell market is in the developing world.

For many communities, simple voice and text connections have brought about revolutions in access to financial, health, agricultural and education services and opportunities for employment. For example, many farmers in rural areas in Africa and Asia use SMS services to find out the daily prices of agricultural commodities. This information allows them to improve their bargaining position when taking their goods to market, and also allows them to switch between end markets.
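To make the idea concrete, here is a minimal sketch of how an SMS price-query service of this kind might answer a farmer’s text message. This is my own illustration, not the code of any real service: the query format, commodity names, markets and prices are all invented, and a real deployment would sit behind an SMS gateway rather than a function call.

```python
# Hypothetical sketch of an SMS commodity-price service. All data and the
# "PRICE <commodity> <market>" query format are invented for illustration.

# Invented sample data: price per bag, keyed by (market, commodity).
PRICES = {
    ("nairobi", "maize"): 2300,
    ("nairobi", "beans"): 5100,
    ("kisumu", "maize"): 2150,
}

def handle_sms(message: str) -> str:
    """Answer a text of the (assumed) form 'PRICE <commodity> <market>'."""
    parts = message.strip().lower().split()
    if len(parts) != 3 or parts[0] != "price":
        return "Usage: PRICE <commodity> <market>"
    _, commodity, market = parts
    price = PRICES.get((market, commodity))
    if price is None:
        return f"No listing for {commodity} in {market}."
    return f"{commodity} in {market}: {price} per bag today."

if __name__ == "__main__":
    print(handle_sms("PRICE maize Nairobi"))   # maize in nairobi: 2300 per bag today.
    print(handle_sms("PRICE maize Mombasa"))   # No listing for maize in mombasa.
```

Even something this simple shows why the leapfrogging matters: the farmer needs nothing more than a basic handset and a text message to get information that once required travelling to market.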

Luis von Ahn on reCaptcha and Duolingo

Patrizia pointed me to a TEDxCMU talk by Luis von Ahn on The Next Chapter in Human Computation. von Ahn is known for Captcha and reCaptcha (which he talks about in the first 8 minutes of the talk). In this talk he introduces his team’s new crowdsourcing project Duolingo, which aims to translate the web while teaching people a second language. Instead of paying $500 for Rosetta Stone software you can learn a language by translating progressively more complex sentences from the web.
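To give a feel for the crowdsourcing mechanism (this is my own toy reconstruction, not von Ahn’s actual algorithm), one simple way to turn many learners’ noisy translations into a usable one is to take the candidate most learners agree on:

```python
from collections import Counter

# Toy reconstruction of the crowd-translation idea (not Duolingo's real
# pipeline): several learners translate the same source sentence and the
# most common candidate becomes the working translation.

def aggregate_translations(candidates: list[str]) -> str:
    """Pick the translation most learners agree on (simple plurality)."""
    normalized = [c.strip().lower() for c in candidates]
    winner, _ = Counter(normalized).most_common(1)[0]
    return winner

learner_translations = [
    "The cat sleeps on the sofa.",
    "The cat is sleeping on the sofa.",
    "The cat sleeps on the sofa.",
]
print(aggregate_translations(learner_translations))
# -> the cat sleeps on the sofa.
```

A real system would weight learners by their track record and handle near-duplicate wordings, but the basic bet is the same: agreement among many novices can approximate one expert.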

von Ahn also calls this a “Fair Business Model for Education”. (There is actually a slide with this phrase.) His argument is that since most of the world doesn’t have the money for software, Duolingo presents a fair way for them to contribute labour in return for learning a language. I note that the fair business model could apply not just to language education, but to other types of education. How could you monetize the teaching of philosophy (or ethics)? What would people do to learn that could also benefit someone else?

Are video games just propaganda and training tools for the military? | Technology | The Guardian

The Guardian has a good story on Are video games just propaganda and training tools for the military? Despite the title, the story doesn’t really take sides. It documents the variety of ways that game companies and armies interact, including companies like Kuma Games that make Kuma\War games out of current events. The article also links to an interesting grant award to Kuma from the Department of Defense for game-based second language training:

Utilizing our tools, experience, and huge library of existing 3D assets we can provide an effective, cost-efficient, rapidly-deployable and easily updatable language retention toolset for trainers and Soldiers deployed around the world. It is our intention to refresh languages skills in an intense and immersive 3D environment, which would be made available as part of an online/offline language exercise portal …

The article also documents games made from an Arab perspective by companies in Syria and developers tied to Hezbollah and Iranian organizations. These games (whether the Kuma\War games or those from other perspectives) can be seen as soft propaganda – normalizing attitudes about who is good and bad. Such games don’t train people the way simulators might, but they can recruit people or legitimize a cause.

One could argue that just as the “soft power” of American movies played a key role in the Cold War, so in the perceived conflict with terrorism software is playing a similar propaganda role. The problem may be that the wrong people are being portrayed as the bad guys. The propaganda on both sides may be too crude and may make reconciliation harder.

In the meantime Iran has sentenced one of the Kuma designers to death, accusing him of developing for the CIA. The game gets serious for this poor designer, who was detained when he visited family in Iran.


In evaluating digital humanities, enthusiasm may outpace best practices – Inside Higher Ed

Inside Higher Ed has a story by Steve Kolowich about the essays we published in the MLA journal Profession on evaluating digital scholarship. The story, The Promotion That Matters (Jan. 4, 2012), quotes my essay On the Evaluation of Digital Media as Scholarship about how the problem now is how to practically review digital scholarship if you have no experience with it (and are on a tenure and promotion committee).

It’s interesting how the article begins with what is becoming a trope – that the digital humanities is the new new thing. This time we have no less a pundit than Stanley Fish proclaiming the arrival of new newness. Kolowich opens the essay (which is mostly about evaluation) by talking about how Fish, the “self-appointed humanities ambassador”, says the digital humanities has replaced postmodernism as the next thing. It’s a nice way to open a story on the digital humanities and I suspect we will see more of this opening for a year or two. (What will it mean when people can’t start their stories this way?)

As for Fish, check out his blog post on the MLA, The Old Order Changeth (Dec. 26, 2011). The essay is based on reading the program (rather than attending) and he notices, among other things, all the digital humanities sessions. As he puts it, after reminding us what it was like when postmodernism was the rage,

So what exactly is that new insurgency? What rough beast has slouched into the neighborhood threatening to upset everyone’s applecart? The program’s statistics deliver a clear answer. Upward of 40 sessions are devoted to what is called the “digital humanities,” an umbrella term for new and fast-moving developments across a range of topics: the organization and administration of libraries, the rethinking of peer review, the study of social networks, the expansion of digital archives, the refining of search engines, the production of scholarly editions, the restructuring of undergraduate instruction, the transformation of scholarly publishing, the re-conception of the doctoral dissertation, the teaching of foreign languages, the proliferation of online journals, the redefinition of what it means to be a text, the changing face of tenure — in short, everything.

I’m intrigued by the possibility that the digital humanities might sweep through with the same arrogance that theory did. (Did it?) Is DH the same sort of new new thing? Fish lists some of the symptoms we might see if the digital humanities drives through like another revolution:

Those who proclaimed the good news in 20-minute talks at the convention welcomed the dawning of a brave new world; those who heard them with dismay felt that the world they knew and labored in quite happily was under assault, and they reacted, in counterpoint 20-minute talks, by making the arguments defenders of an embattled regime always make: it’s just a passing fad; everything heralded as new can be found in Plato and Aristotle; what is proclaimed as liberating is actually the abandonment of reason and rigor; a theory that preaches the social construction of everything collapses under its own claims; the stuff is unreadable; it has no content apart from its obfuscating jargon; maybe it will just go away.

I hope Fish is wrong. My hope is that colleagues not interested in the digital realize that we are not threatening to replace other forms of scholarship so much as to extend them. Digital practice does not deconstruct other practices/theories/methodologies; it supplements them and re-engages them. From the perspective of practice, one of the things that exemplifies the digital humanities is that it is often experienced in projects that bring together “traditional” scholars and digital humanists rather than as a confrontation. Some of the more enthusiastic may think that the digital humanities can replace the practices of the last generation, and in some cases the digital humanities does raise new questions, but the history of computing in the humanities has never been confrontational. (Instead, I would argue that the digital humanities has been a little too servile, pretending that all we wanted to do was bring new methods to old problems.) Our disciplinary history is that of a prosthesis or monster stitched from the old and the new. For that reason I doubt we will be the same sort of new new thing that postmodernism was. We don’t pretend to attack the foundations of the humanities so much as to extend them. We need our colleagues rather than despise them. We spend our time reaching outside of the humanities rather than gazing into its navel.

No … the danger is not that the digital humanities will try to deconstruct what came before, but that it introduces a new form of busy-ness to the humanities, one that distracts humanists from whatever is truly important. (And we all know what that is … don’t we?) The digital humanities is endlessly complicated, especially because it draws limbs from alien fields like the sciences and engineering. DH introduces new jargon, new languages (as in programming languages), new techniques, new practices, and new communal projects. All of this newness will keep us busy keeping up. All this newness will seem too much for many who haven’t the time to embrace something so time-consuming, no matter how friendly. Many will keep quiet for fear that others will think they are stupid because they don’t get computing, when really they haven’t the time for the old, the new, and the new new. Others will practice some cute put-down, just like all the cute ways we put down other movements we haven’t the time to master. Most will just feel the hug of the digital is a bit too friendly and a bit too tight. They will wait it out until one day something else is announced as the new form of new new.

So what do I mean by busy-ness being the danger rather than replacement? Busy-ness is my word for the danger of constant activity that Heidegger saw in “The Age of the World Picture”, though there is something envious and cynical in his characterization of this technical turn. It is the danger of hyperpedantry, where you perform the activities of wisdom faster and faster rather than thinking through wisdom. It really isn’t a new danger (Plato, of course, also warned us about this). It is the danger that those tired of being left behind warn others about in the hopes they will slow down. It is a danger too often voiced by the grant-envious, which means we don’t listen to them. Here is Heidegger on it:

The decisive unfolding of the character of modern science as constant activity produces, therefore, a human being of another stamp. The scholar disappears and is replaced by the researcher engaged in research programs. These, and not the cultivation of scholarship, are what places his work at the cutting edge. The researcher no longer needs a library at home. He is, moreover, constantly on the move. He negotiates at conferences and collects information at congresses. He commits himself to publishers’ commissions. It is publishers who now determine which books need to be written.

From an inner compulsion, the researcher presses forward into the sphere occupied by the figure of, in the essential sense, the technologist. Only in this way can he remain capable of being effective, and only then, in the eyes of his age, is he real. Alongside him, an increasingly thinner and emptier romanticism of scholarship and the university will still be able to survive for some time at certain places. (p. 64, from the collection Off the Beaten Track, trans. by Julian Young and Kenneth Haynes.)

Whatever Heidegger says, I believe it is impossible to distinguish between busy-ness and whatever is considered “real work” or real scholarship. The difference is ineffable, but that is not why busy-ness is a danger to the digital humanities. Busy-ness is the danger because it is the other of technical activity. Practical activity is what the humanities needs after theory, but also what it will tire of. At the very moment when we think the digital humanities has made a pragmatic difference we will worry that there is no meaning to all the technique. The digital humanities will not be critiqued as another replacement or another post-post; it will exhaust itself and be found empty. The rhetoric will turn to wisdom and away from best practices.

Internet use and transactive memory – Contemplative Computing

From Humanist I was led to a good summary blog entry on Internet use and transactive memory. Transactive memory is a group or stored memory that we depend on instead of remembering the information itself. We do this all the time (even before computers) when, for example, we depend on a cookbook for a recipe we have used before but can’t be bothered to memorize. Given books like Carr’s The Shallows, there is debate about whether Google and the internet, as transactive memory, are making us stupider.
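By way of analogy (my illustration, not something from the research discussed here), the difference between memorizing and transactive memory is the difference between holding a value in your own store and only remembering where to look it up:

```python
# Analogy for transactive memory (my own illustration, not from the
# research discussed here): instead of storing the answer itself, we
# remember *where* it lives and look it up on demand.

COOKBOOK = {"pancakes": "flour, milk, eggs; fry until golden"}  # external store

memorized = {}  # what we actually hold in our heads

def recall(dish: str) -> str:
    if dish in memorized:       # biological memory: we know the recipe itself
        return memorized[dish]
    return COOKBOOK[dish]       # transactive memory: we only know where to look

print(recall("pancakes"))       # found in the cookbook, not in our heads
```

The cookbook, the spouse who remembers birthdays, and Google all play the same role in this sketch: an external store we trust instead of our own.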

The real question is not whether offloading memory to other people or to things makes us stupid; humans do that all the time, and it shouldn’t be surprising that we do it with computers. The issues, I think, are 1) whether we do this consciously, as a matter of choice rather than as an accident; and 2) what we seek to gain by doing so.

This entry was sparked by recent news of research results on this subject by Dr. Sparrow and others (see the YouTube interview). You can see Carr’s blog entry at Rough Type: Nicholas Carr’s Blog: Minds like sieves. Carr seems to think this reinforces his view that we are shifting to depending too much on technological transactive memory. Sparrow is more careful about drawing conclusions. We may have always depended on transactive memory, but we are now focusing on one type – the internet. In Plato’s Phaedrus, Socrates focused on writing as the technology tempting us to forget.

Of course, forgetting costs, so we may not have to worry if we don’t want to pay.