I remember the beginnings of computer-assisted presentations. My unit at the University of Toronto Computing Services experimented with the first tools and projectors. The three-gun projectors were finicky to set up, and I felt a little guilty promoting setups that I knew would take lots of technical support. In one presentation on digital presentations, there was actually a colleague under the table making sure all the technology worked while I pitched it to faculty.
Alas, PowerPoint came to dominate, though now we have a number of innovative presentation tools that work on the web, from Google Slides to Prezi.
Now back to Tufte. His critique still stands. Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence. He points out that a well-designed handout (like his pamphlet The Cognitive Style of PowerPoint) can present a lot more information in a way that doesn’t hide the connections. You can have something more like a concept map that you take people through on a tour. Prezi deserves credit for paying attention to Tufte and breaking out of the linear style.
Now, of course, there are AI tools that can generate presentations, like Presentations.ai or Slideoo. You can see a list of a number of them here. No need to know what you’re presenting: an AI will generate the content, design the slides, and soon present it too.
I gave a talk on “The Knowledge We Bear” that looked at four of the main structures that discipline the ways we bear knowledge in the university as institution. I also moderated a dialogue between Kevin Kee and Jacques Beauvais.
The three days were extraordinary thanks to the leadership of my co-organizer Natalie Loveless. I learned a lot about the weaving of research and creation together.
In many ways this was my last major initiative as Director of KIAS. On July 1st Michael O’Driscoll will take over. It was a way of reflecting on institutes and what they can do with others. I’m grateful to all those who participated.
U of A computing scientists work with Japanese researchers to refine a virtual and mixed reality video game that can improve motor skills for older adults and sedentary people.
The Folio of the University of Alberta published a story about a trip to Japan that I and others embarked on: U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats. Ritsumeikan invited us to develop research collaborations around gaming, language and artificial intelligence. Our visit was a chance to further those collaborations, like the one my colleagues Eleni Stroulia and Victor Fernandez Cervantes are developing with Ruck Thawonmas around games for older adults. This inter-university set of collaborations builds on projects I have been involved in going back to 2011, including a conference (Replaying Japan) and a journal, the Journal of Replaying Japan.
The highlight was the signing of a Memorandum of Understanding by the two presidents (of the U of A and Ritsumeikan). I was also involved, as was Professor Nakamura. May the collaboration thrive.
In spheres as disparate as medicine and cryptocurrencies, “do your own research,” or DYOR, can quickly shift from rallying cry to scold.
The New York Times has a nice essay by John Herrman, They Did Their Own ‘Research.’ Now What? The essay discusses the loss of trust in authorities and the uses and misuses of DYOR (Do Your Own Research) gestures, especially in discussions about cryptocurrencies. DYOR seems to act rhetorically as:
Advice that readers should do research before making a decision rather than trusting authorities (doctors, financial advisors, etc.).
A disclaimer that readers should not blame the author if things don’t turn out right.
A scold for those who are not committed to whatever is being pushed as based on research. It is a form of research signalling – “I’ve done my research; if you don’t believe me, do yours.”
A call to join a community of instant researchers who are skeptical of authority. If you DYOR then you can join us.
A call to process (of doing your own research) over truth. Enjoy the research process!
An exhortation to become an independent thinker who is not in thrall to authorities.
DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.
The question is whether reading around is really doing research or whether it is selective listening. What does it mean to DYOR in the area of vaccines? It seems to mean not trusting science and instead listening to all sorts of sympathetic voices.
What does this mean for the research we do in the humanities? Don’t we sometimes focus too much on discourse and not give due weight to the actual science or authority of those we are “questioning”? Haven’t we modelled this critical stance where what matters is that one overturns hierarchy/authority and democratizes the negotiation of truth? Irony, of course, trumps all.
From a CGSA/ACÉV Statement Against Exploitation and Oppression in Games Education and Industry comes a link to a video report by People Make Games. The report documents emotional abuse in the education and indie game space. It deals with how leaders can create a toxic environment and how they can fail to take criticism seriously. A myth of the “auteur” in game design then protects the superstar leaders, which is why they called the video “people make games” (not single auteurs). Watch it.
A Hong Kong company has developed facial expression-reading AI that monitors students’ emotions as they study. With many children currently learning from home, they say the technology could make the virtual classroom even better than the real thing.
With cameras everywhere, this should worry us. We are not only being identified by face recognition, but now they want to know our inner emotions too. What sort of theory of emotions licenses these systems?
A new set of online games holds promise for helping identify and prevent harmful misinformation from going viral.
Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.
Prebunking is being touted as A New Way to Inoculate People Against Misinformation. The idea is that one can inoculate people against the manipulation of misinformation. This strikes me as similar to how we were taught to “read” advertising in order to inoculate us against corporate manipulation. Did it work?
What’s the game like? Under the hood it feels like a branching, choose-your-own-adventure game in which a manager walks you through what you might do or not do and then compliments you when you are a good troll. There is a ticker so you can see the news about Harmony Square. It feels a bit pedantic when the managerial/editorial voice says things like “Kudos for paying attention to buzzwords. You ignored the stuff that isn’t emotionally manipulative.” Still, the point is to understand what can be done to manipulate a community so that you are inoculated against it.
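The branching, choose-your-own-adventure structure described above can be sketched as a small dialogue tree. This is a hypothetical illustration, not the actual Harmony Square implementation; the node names, headlines and feedback text are all invented:

```python
# A minimal sketch of a branching dialogue tree like the one the game
# seems to use. All content here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                   # what the manager/editorial voice says
    choices: dict = field(default_factory=dict)   # player choice -> next Node

# Two outcomes: the manipulative pick earns "kudos", the dry pick falls flat.
calm = Node(prompt="You posted the dry headline. Engagement is flat.")
troll = Node(prompt="Kudos for paying attention to buzzwords!")
start = Node(
    prompt="Pick a headline for the Harmony Square ticker:",
    choices={
        "OUTRAGE: New park statue is a DISGRACE?!": troll,
        "Council unveils new park statue": calm,
    },
)

def play(node: Node, pick: str) -> Node:
    """Follow one choice; unknown picks (or leaf nodes) stay put."""
    return node.choices.get(pick, node)

next_node = play(start, "OUTRAGE: New park statue is a DISGRACE?!")
print(next_node.prompt)
```

The pedagogical trick is in the feedback attached to each branch: the game rewards the manipulative choice in-fiction while explaining out-of-fiction why it worked.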
An important point made by the article is that games, education and other interventions are not enough. Driver’s education is only part of safe roads; laws and infrastructure are also important.
I can’t help feeling that we are repeating a pattern of panic followed by literacy proposals in the face of new media politics. McLuhan drew our attention to manipulation by media and advertising, and I remember well-intentioned classes on reading advertising like this more current one. Did they work? Will misinformation literacy work now? Or is the situation more complex, with people like Trump willing to perform convenient untruths?
Whatever the effectiveness of games or literacy training, it is interesting how “truth” has made a comeback. At the very moment when we seem to be witnessing the social and political construction of knowledge, we are hearing calls for truth.
Today was the third day of a symposium I helped organize on Ethics in the Age of Smart Systems. For this we experimented with first organizing a “dialogue” or informal paper and discussion on a topic around AI ethics once a month. These led into the symposium that ran over three days. We allowed for an ongoing conversation after the formal part of the event each day. We were also lucky that the keynotes were excellent.
Veena Dubal talked about Proposition 22 and how it has created a new employment category of those managed by algorithm (gig workers). She argued that this amounts to a new racial wage code, as most Uber/Lyft workers are people of colour or immigrants.
Virginia Dignum talked about how everyone is announcing their principles, but these principles are not enough. She argued that we also need standards; advisory panels and ethics officers; assessment lists (checklists); public awareness; and participation.
Rafael Capurro gave a philosophical paper about the smart in smart living. He talked about metis (the Greek for cunning) and different forms of intelligence. He called for hesitation in the sense of taking time to think about smart systems. His point was that there are time regimes of hype and determinism around AI and we need to resist them and take time to think freely about technology.
But despite data science’s exciting possibilities, plenty of other academics object to it
The Economist has a nice Christmas Special on the digital humanities – How data analysis can enrich the liberal arts. The article tells a bit of our history (starting with Busa, of course) and gives examples of new work like that of Ted Underwood. They note criticism about how DH may be sucking up all the money or corrupting the humanities, but they also point out how little DH gets from the NEH pot (some $60m out of $16bn), which is hardly evidence of a takeover. The truth, as they note, is that the humanities are under attack again and the digital humanities don’t make much of a difference either way. The neighboring fields that I see students moving to are media arts, communication studies and specializations like criminology. Those are the threats, but also sanctuaries, for the humanities.