A groundbreaking study shows kids learn better on paper, not screens. Now what?

For ‘deeper reading’ among children aged 10-12, paper trumps screens. What does it mean when schools are going digital?

The title of this Guardian story says it all, A groundbreaking study shows kids learn better on paper, not screens. Now what? The story reports on a study led by Karen Froud at Columbia University titled, Middle-schoolers’ reading and processing depth in response to digital and print media: An N400 study. They found “evidence of differences in brain responses to texts presented in print and digital media, including deeper semantic encoding for print than digital texts.” Paper works better.

John Gabrieli, an MIT neuroscientist, is skeptical about the promises of big tech and its salesmen: “I am impressed how educational technology has had no effect on scale, on reading outcomes, on reading difficulties, on equity issues,”…

OpenAI’s GPT store is already being flooded with AI girlfriend bots

OpenAI’s store rules are already being broken, illustrating that regulating GPTs could be hard to control

From Slashdot I learned about a story on how OpenAI’s GPT store is already being flooded with AI girlfriend bots. It isn’t particularly surprising that you can get different girlfriend bots, nor that these are something you can build in ChatGPT-4. ChatGPT is, after all, a chatbot. What will be interesting to see is whether these chatbot girlfriends are successful. I would have imagined that men would want pornographic girlfriends and that the market for companions would lean more towards boyfriends along the lines of what Replika offers.

Column: AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it

AI investors say their work is so important that they should be able to trample copyright law on their pathway to riches. Here’s why you shouldn’t believe them.

Michael Hiltzik has a nice column about how AI investors say they’ll go broke if they have to pay for copyrighted works. Don’t believe it. He quotes Andreessen Horowitz, a venture capital firm investing heavily in AI, as saying,

The only way AI can fulfill its tremendous potential is if the individuals and businesses currently working to develop these technologies are free to do so lawfully and nimbly.

This is like saying that the businesses of the mafia could fulfill their potential if they were allowed to do so lawfully and nimbly. It also assumes there is tremendous potential, and no pernicious side effects to AI. Do we really know there is positive potential and that it is tremendous?

Hiltzik is quite good on the issue of training on copyrighted material, something playing out as we speak. I suspect that if the courts allow the free use of large content platforms for model training that we will then find these collections of content being sequestered behind license walls that prevent their scraping.

Elon Musk, X and the Problem of Misinformation in an Era Without Trust

Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.

Jennifer Szalai of the New York Times has a good book review essay on misinformation and disinformation, Elon Musk, X and the Problem of Misinformation in an Era Without Trust. She writes about how Big Tech (Facebook and Google) benefits from the view that people are being manipulated by social media: it helps sell their services, even though there is little evidence of clear and easy manipulation. It is possible that there is an academic business of Big Disinfo that is invested in a story about fake news and its solutions. The problem instead may be the authority of elites who regularly lie to the US public. Think of the lies told after 9/11 to justify the “war on terror”; why should we believe any “elite”?

One answer is to call on people to “do your own research.” Of course that call has its own agenda: it tends to be a call for unsophisticated research through the internet. Everyone should do their own research, but in most cases we can’t. What would it take to really understand vaccines through your own research, as opposed to joining some epistemic community and calling the parroting of its truisms research? With the internet there is an abundance of research communities to join that will make you feel well-researched. Who needs a PhD? Who needs to actually do original research? Conspiracies, like academic communities, provide safe havens for networks of ideas.

Meet the Amii Fellows: Geoffrey Rockwell

Learn more about the research and work of Geoffrey Rockwell, one of the latest Fellows to join Amii’s team of world-class researchers. Geoffrey is a professor in both Media Tech Studies and in the philosophy department at the University of Alberta.

The Alberta Machine Intelligence Institute (Amii) has put up a video interview with me and Alona Fyshe designed to introduce new Fellows (like me). Dr. Fyshe is one of the Fellows working on machine learning and natural language processing. The interview is at Meet the Fellows: Geoffrey Rockwell.

How AI Image Generators Make Bias Worse – YouTube

A team at the LIS (London Interdisciplinary School) have created a great short video on the biases of AI image generators. The video covers the issues quickly and is documented with references you can follow for more. I had been looking at how image generators portrayed academics like philosophers, but this reports on research that went much further.

What is also interesting is how this grew out of an LIS undergraduate’s first-year project. It says something about LIS that they encourage and build on such projects. This got me wondering about LIS, which I had never heard of before. It seems to be a new teaching college in London, UK, built around interdisciplinary programmes, not departments, that deal with “real-world problems.” It sounds a bit like problem-based learning.

Anyway, it will be interesting to watch how it evolves.

Who wants to farm potatoes in the metaverse? Exploring Roblox’s corporate hell-worlds

Everyone from Samsung to Victoria’s Secret is getting in on Roblox. We hunted down the very worst branded experiences in the all-ages game platform (and an unofficial Ryanair world)

Rich Pelley of the Guardian has a nice article about the worst corporate games in Roblox, Who wants to farm potatoes in the metaverse? Exploring Roblox’s corporate hell-worlds. Canada’s McCain’s Farms of the Future, for example, explains regenerative farming of potatoes. You can see McCain’s Regen Fries site here.

This use of a virtual gaming platform for advertising reminds me of the way Second Life was used by companies to build virtual advertising real estate. Once a space becomes popular, the advertisers follow.

CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama

The Eight Sleep pod is a mattress topper with a terms of service and a privacy policy. The company “may share or sell” the sleep data it collects from its users.

From Slashdot, a story about how a CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama. The story is worrisome because of the data being gathered by a smart mattress company and the use it is being put to. I’m less sure of the inferences the CEO (Matteo Franceschetti) draws from his data, or of his call to “fix this.” How would Eight Sleep fix this? Sell more product?

Huminfra: The Imitation Game: Artificial Intelligence and Dialogue

Today I gave a talk online for an event organized by Huminfra, a Swedish national infrastructure project. The title of the talk was “The Imitation Game: Artificial Intelligence and Dialogue” and it was part of an event online on “Research in the Humanities in the wake of ChatGPT.” I drew on Turing’s name for the Turing Test, the “imitation game.” Here is the abstract,

The release of ChatGPT has provoked an explosion of interest in the conversational opportunities of generative artificial intelligence (AI). In this presentation Dr. Rockwell will look at how dialogue has been presented as a paradigm for thinking machines, starting with Alan Turing’s proposal to test machine intelligence with an “imitation game,” now known as the Turing Test. In this context Rockwell will show Veliza, a tool developed as part of Voyant Tools (voyant-tools.org) that lets you play with and script a simple chatbot based on ELIZA, which was developed by Joseph Weizenbaum in 1966. ELIZA was one of the first chatbots with which you could have a conversation. It responded as if it were a psychotherapist, turning whatever you said back into a question. While it was simple, it could be quite entertaining and thus provides a useful way to understand chatbots.
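The psychotherapist trick ELIZA used is simple enough to sketch in a few lines: rules pair a pattern with a response template, and a reflection table swaps first- and second-person words so a statement comes back as a question. The sketch below is an invented illustration of that technique, not Weizenbaum’s original DOCTOR script and not how Veliza is implemented; the rules and names here are my own.

```python
import re

# Swap first- and second-person words so "my job" becomes "your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# (pattern, response template) pairs, in the spirit of the DOCTOR script.
# The last catch-all rule guarantees there is always a reply.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in a captured fragment of the user's statement."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Answer with the template of the first rule whose pattern matches."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

For example, `respond("I need a holiday")` turns the statement around into “Why do you need a holiday?” The entertainment value, then as now, comes entirely from the user reading understanding into pattern matching.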

PARRY encounters the DOCTOR (RFC439)

V. Cerf set up a dialogue between two of the most famous early chatbots, PARRY encounters the DOCTOR (RFC 439). The DOCTOR is the therapist script for Weizenbaum’s ELIZA, and it is how people usually encounter ELIZA. PARRY, developed by Kenneth Colby, acts like a paranoid schizophrenic. Putting them into dialogue therefore makes a kind of sense, and the result is amusing.

It is also interesting that this is an RFC (Request For Comments), a genre normally reserved for Internet technical documents.