Who wants to farm potatoes in the metaverse? Exploring Roblox’s corporate hell-worlds

Everyone from Samsung to Victoria’s Secret is getting in on Roblox. We hunted down the very worst branded experiences in the all-ages game platform (and an unofficial Ryanair world)

Rich Pelley of the Guardian has a nice article about the worst corporate games in Roblox, Who wants to farm potatoes in the metaverse? Exploring Roblox’s corporate hell-worlds. Canada’s McCain’s Farms of the Future, for example, explains the regenerative farming of potatoes. You can see McCain’s Regen Fries site here.

This use of a virtual gaming platform for advertising reminds me of the way Second Life was used by companies to build virtual advertising real estate. Once a space becomes popular, the advertisers follow.

Jeff Pooley, “Surveillance Publishing”

Arun sent me the link to a good paper by Jeff Pooley on Surveillance Publishing in the Journal of Electronic Publishing. The article compares what Google does to rank pages based on links to citation analysis (which inspired Brin and Page). It looks at how both web search and citation analysis have been monetized by Google and by citation network services like Web of Science. Now publishing companies like Elsevier make money off tools that report on and predict publishing. We write papers with citations and publish them. Then we buy services built on our citational work, and administrators buy services telling them who publishes the most and where the hot areas are. As Pooley puts it,

Siphoning taxpayer, tuition, and endowment dollars to access our own behavior is a financial and moral indignity.
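The bibliometric idea that inspired Brin and Page, ranking items by who cites (or links to) whom, can be sketched with a toy power-iteration PageRank over a citation graph. The paper names and damping factor below are illustrative assumptions of mine, not anything from Pooley's article:

```python
# Toy power-iteration PageRank over a hypothetical citation graph.
def pagerank(cites, damping=0.85, iters=50):
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, refs in cites.items():
            if refs:
                # A paper passes its rank, in equal shares, to what it cites.
                share = damping * rank[p] / len(refs)
                for r in refs:
                    new[r] += share
            else:
                # Dangling paper (cites nothing): spread its rank evenly.
                for r in papers:
                    new[r] += damping * rank[p] / n
        rank = new
    return rank

citations = {
    "A": ["B", "C"],  # paper A cites B and C
    "B": ["C"],
    "C": [],
}
ranks = pagerank(citations)
print(max(ranks, key=ranks.get))  # prints "C": cited by both other papers
```

The same mathematics works whether the nodes are web pages or journal articles, which is exactly the lineage Pooley traces from bibliometrics to Google.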

The article also points out that predictive services have been around since before Google. The insurance and credit rating businesses have used surveillance for some time.

Pooley ends by talking about how these publication surveillance tools then encourage quantification of academic work and facilitate local and international prioritization. The Anglophone academy measures things and discovers itself so it can then reward itself. What gets lost is the pursuit of knowledge.

In that sense, the “decision tools” peddled by surveillance publishers are laundering machines—context-erasing abstractions of our messy academic realities.

The full abstract is here:

This essay develops the idea of surveillance publishing, with special attention to the example of Elsevier. A scholarly publisher can be defined as a surveillance publisher if it derives a substantial proportion of its revenue from prediction products, fueled by data extracted from researcher behavior. The essay begins by tracing the Google search engine’s roots in bibliometrics, alongside a history of the citation analysis company that became, in 2016, Clarivate. The essay develops the idea of surveillance publishing by engaging with the work of Shoshana Zuboff, Jathan Sadowski, Mariano-Florentino Cuéllar, and Aziz Huq. The recent history of Elsevier is traced to describe the company’s research-lifecycle data-harvesting strategy, with the aim to develop and sell prediction products to university and other customers. The essay concludes by considering some of the potential costs of surveillance publishing, as other big commercial publishers increasingly enter the predictive-analytics business. It is likely, I argue, that windfall subscription-and-APC profits in Elsevier’s “legacy” publishing business have financed its decade-long acquisition binge in analytics. The products’ purpose, moreover, is to streamline the top-down assessment and evaluation practices that have taken hold in recent decades. A final concern is that scholars will internalize an analytics mindset, one already encouraged by citation counts and impact factors.

Source: Pooley | Surveillance Publishing | The Journal of Electronic Publishing

Wordle – A daily word game

Wordle Logo

Guess the hidden word in 6 tries. A new puzzle is available each day.

Well … I finally played Wordle – A daily word game after reading about it. It was a nice clean puzzle that got me thinking about vowels. I like the idea that there is only one a day, as I was immediately tempted to try another and another … Instead, the one-a-day limit gives it a certain detachment. I can see why the New York Times would buy it; it is the sort of game that would bring in potential subscribers.
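The feedback that makes the puzzle work can be sketched in a few lines. This is my reconstruction of the familiar rules (green for right letter in the right spot, yellow for right letter in the wrong spot, gray otherwise), not the game's actual code; the two-pass approach handles repeated letters:

```python
from collections import Counter

def score(guess, answer):
    """Wordle-style feedback for a guess against the hidden answer."""
    result = ["gray"] * len(guess)
    remaining = Counter()
    # First pass: mark greens and count the unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "green"
        else:
            remaining[a] += 1
    # Second pass: mark yellows, consuming leftover letter counts
    # so a letter is never flagged more times than it appears.
    for i, g in enumerate(guess):
        if result[i] != "green" and remaining[g] > 0:
            result[i] = "yellow"
            remaining[g] -= 1
    return result

print(score("crane", "cigar"))
# ['green', 'yellow', 'yellow', 'gray', 'gray']
```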

Right Research: Modelling Sustainable Research Practices in the Anthropocene – Open Book Publishers

This timely volume responds to an increased demand for environmentally sustainable research, and is outstanding not only in its interdisciplinarity, but its embrace of non-traditional formats, spanning academic articles, creative acts, personal reflections and dialogues.

Open Book Publishers has just published the book I helped edit, Right Research: Modelling Sustainable Research Practices in the Anthropocene. The book gathers essays that came out of the last Around the World Conference that the Kule Institute for Advanced Study ran on Sustainable Research.

The Around the World econferences we ran were experiments in trying to find a more sustainable way to meet and exchange ideas that involved less flying. It is good to see this book out in print.

Editorial for IRIE Vol. 29 – The International Review of Information Ethics

A short editorial I wrote for the International Review of Information Ethics (IRIE) was just published, Editorial: On IRIE Vol. 29. In it I talk about how we need to get beyond principles in the ethics of artificial intelligence, as the Google Duplex story shows.

The editorial was for the second part of a collection of articles that came out of a conference that the Kule Institute for Advanced Study organized on AI, Ethics and Society in 2019.

I should add that KIAS has helped move the IRIE from its previous home to the open journal platform run by the University of Alberta Library. We are grateful for the fabulous support from the UofA Library.

JSTOR Text Analyzer

JSTOR, and some other publishers of electronic research, have started building text analysis tools into their publishing platforms. I came across this at the end of a JSTOR article where there was a link to “Get more results on Text Analyzer” which leads to a beta of the JSTOR Labs Text Analyzer environment.

JSTOR Labs Text Analyzer

This analyzer environment provides simple analytical tools for surveying an issue of a journal or an article. The emphasis is on extracting keywords and entities so that one can figure out whether an article or journal is useful. One can also use it to find other similar items.

Results of Text Analyzer

What intrigues me is this embedding of tools into reading environments which is different from the standard separate data and tools model. I wonder how we could instrument Voyant so that it could be more easily embedded in other environments.
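The keyword extraction at the heart of such a tool can be approximated with simple TF-IDF term weighting. JSTOR's actual method is not documented here, so this is purely an illustrative sketch with a made-up mini-corpus:

```python
# Toy TF-IDF keyword extractor: terms frequent in one document but
# rare across the corpus score highest. Stopword list and corpus
# are illustrative assumptions.
import math
import re
from collections import Counter

STOP = {"a", "an", "and", "the", "of", "in", "to"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]

def top_keywords(doc, corpus, k=3):
    """Rank the terms of doc by TF-IDF against the whole corpus."""
    tf = Counter(tokenize(doc))
    n_docs = len(corpus)
    def idf(term):
        hits = sum(1 for d in corpus if term in tokenize(d))
        return math.log((1 + n_docs) / (1 + hits)) + 1
    scored = {t: c * idf(t) for t, c in tf.items()}
    return [t for t, _ in sorted(scored.items(), key=lambda x: -x[1])[:k]]

corpus = [
    "citation analysis of scholarly journals",
    "potato farming in virtual worlds",
    "surveillance publishing and citation metrics",
]
print(top_keywords(corpus[2], corpus))
```

Real systems add entity recognition and trained topic models on top, but the basic move, surfacing the distinctive vocabulary of a text so a reader can judge its relevance, is the same.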

Towards monocultural (digital) Humanities?

[Graph: languages used in DH journals, from Fiormonte]
Domenico Fiormonte has written a nice essay on how the humanities (and digital humanities) run the risk of becoming monolingual, Towards monocultural (digital) Humanities?. The essay is a response to Greg Crane’s The Big Humanities, National Identity and the Digital Humanities in Germany, and Greg then responds to Domenico here. The numbers are depressing (see the graphs from Domenico above). As he puts it (drawing on research with a colleague into DH journals):

These data show that the real problem is not that English is the dominant language of academic publications (and of DH), but that both Anglophone and a high percentage of non-Anglophone colleagues barely use/quote non-Anglophone sources in their research.

I can’t help thinking that the internet has allowed the big to get even bigger. The dominance of English in academic circles is exacerbated by the instant availability of English research. National languages don’t even have location as an advantage on the internet.

What can we do about it? Miran had a nice reply on Humanist (to the original posting by Greg Crane, which was also on Humanist). Domenico suggests that we all have to take some responsibility, especially those of us who have the “free ride” of being native English writers.

It is the responsibility of dominant languages and cultures to translate from marginal or less influential languages.

Traces – Augmented reality gifts

From a New Scientist article I learned about Traces. Traces lets you leave a bundle of information (like a song and some greetings) for someone at a particular GPS location (and at a particular time). You can thus use it to leave gifts for other people to find. It strikes me as a neat use of augmented reality. I can imagine all sorts of uses for it beyond gifts:

  • One could use it to leave information about a place.
  • It could be used by artists to leave AR works as imagined by William Gibson in Spook Country.
  • One could create alternate reality games with it.

Alas, it is not available in the Canadian App Store.
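The core mechanic, unlocking content only near a GPS anchor and within a time window, is simple to sketch. Traces' actual implementation is not public, so the field names, 50-metre radius, and Edmonton coordinates below are my assumptions:

```python
# Sketch of a location-and-time gated "drop" check using the
# haversine great-circle distance between two GPS points.
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two (latitude, longitude) points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_open(drop, lat, lon, now, radius_m=50):
    """A drop unlocks only near its anchor and within its time window."""
    close_enough = haversine_m(drop["lat"], drop["lon"], lat, lon) <= radius_m
    in_window = drop["opens"] <= now <= drop["closes"]
    return close_enough and in_window

drop = {
    "lat": 53.5461, "lon": -113.4938,  # hypothetical anchor in Edmonton
    "opens": datetime(2015, 6, 1, 9, 0),
    "closes": datetime(2015, 6, 1, 18, 0),
    "payload": "a song and some greetings",
}
print(can_open(drop, 53.5462, -113.4937, datetime(2015, 6, 1, 12, 0)))  # True
```

The same gate would serve the artist and alternate-reality-game uses imagined above: the payload changes, the unlock check does not.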

A World Digital Library Is Coming True!

Robert Darnton has a great essay in The New York Review of Books titled, A World Digital Library Is Coming True! This essay asks about publication and the public interest. He mentions how expensive some journals are getting and how that means that knowledge paid for by the public (through support for research) becomes inaccessible to the very same public which might benefit from the research.

In the US this trend has been counteracted by initiatives to legislate that publicly funded research be made available through some open access venue like PubMed Central. Needless to say, lobbyists are fighting such mandates as the Fair Access to Science and Technology Research Act (FASTR).

Darnton concludes that “In the long run, journals can be sustained only through a transformation of the economic basis of academic publishing.” He argues for “flipping” the costs and charging processing fees to those who want to publish.

By creating open-access journals, a flipped system directly benefits the public. Anyone can consult the research free of charge online, and libraries are liberated from the spiraling costs of subscriptions. Of course, the publication expenses do not evaporate miraculously, but they are greatly reduced, especially for nonprofit journals, which do not need to satisfy shareholders. The processing fees, which can run to a thousand dollars or more, depending on the complexities of the text and the process of peer review, can be covered in various ways. They are often included in research grants to scientists, and they are increasingly financed by the author’s university or a group of universities.

While I agree on the need to focus on the public good, I worry that “flipping” will limit who gets published. In STEM fields where most research is funded one can build the cost of processing fees into the funding, but in the humanities where much research is not funded, many colleagues will have to pay out of pocket to get published. Darnton mentions how at Harvard (his institution) they have a program that subsidizes processing fees … they would, and therein lies the problem. Those at wealthy institutions will now have an advantage in that they can afford to publish in an environment where publishers need processing fees while those not subsidized (whether private scholars, alternative academics, or instructors) will have to decide if they really can afford to. Creating an economy where it is not the best ideas that get published but those of an elite caste is not a recipe for the public good.

I imagine Darnton recognizes the need for solutions other than processing fees and, in fact, he goes on to talk about the Digital Public Library of America and OpenEdition Books as initiatives that are making monographs available online for free.

I suspect that what will work in the humanities is finding funding for the editorial and publishing functions of journals as a whole rather than for individual articles. We have a number of journals in the digital humanities, like Digital Humanities Quarterly, where the costs of editing and publishing are borne by individuals like Julia Flanders who have made it a labor of love, by their universities that support them, and by our scholarly association that provides technical support and some funding. DHQ doesn’t charge processing fees, which means that all sorts of people who don’t have access to subsidies can be heard. It would be interesting to poll the authors published and see how many have access to processing fee subsidies. It is bad enough that our conferences are expensive to attend; let’s not skew the published record too.

Which brings me back to the public good. Darnton ends his essay writing about how the DPLA is networking all sorts of collections together. It is not just providing information as a good, but bringing together smaller collections from public libraries and universities. This is one of the possibilities of the internet – that distributed resources can be networked into greater goods rather than having to be centralized. The DPLA doesn’t need to be THE PUBLIC LIBRARY that replaces all libraries the way Amazon is pushing out book stores. The OpenEdition project goes further and offers infrastructure for publishing knowledge to keep costs down for everyone. A combination of centrally supported infrastructure that is used by editors that get local support (and credit) will make more of a difference than processing fees, be more equitable, and do more for public participation, which is a good too.

A Short History of the Highrise

The New York Times and the National Film Board (of Canada) have collaborated on a great interactive, A Short History of the Highrise. The interactive plays as a documentary that you can stop at any point to explore details. The director, Katerina Cizek, talks on the About page about their inspiration:

I was inspired by the ways storybooks have been reinvented for digital tablets like the iPad. We used rhymes to zip through history, and animation and interactivity to playfully revisit a stunning photographic collection and reinterpret great feats of engineering.

For the NFB this is part of their larger Highrise many-media project.