John Roach, Pioneer of the Personal Computer, Is Dead at 83 – The New York Times

He helped make the home computer ubiquitous by introducing the fully assembled Tandy TRS-80, which was so novel at the time that it became a museum piece.

The New York Times reports that John Roach, Pioneer of the Personal Computer, Is Dead at 83. Roach was the executive who introduced the Tandy TRS-80 in the 1970s, one of the first fully assembled microcomputers. I didn’t realize how dominant the TRS-80 was in the late 1970s. At one point it held 40% of the market. We usually hear about Apple and IBM, but not about the TRS (Tandy Radio Shack).

They later released a laptop-like computer that I lusted after, the TRS-80 Model 100. This was a keyboard and a small LCD screen with enough software to type notes or edit text. There was also a modem for sending your writing somewhere. I still think this form factor makes sense. You can’t really type on an iPad (unless you get a keyboard for it), and you don’t really need a lot of screen for typing notes.

People Make Games

From a CGSA/ACÉV Statement Against Exploitation and Oppression in Games Education and Industry, a link to a video report by People Make Games. The report documents emotional abuse in the education and indie game space. It deals with how leaders can create a toxic environment and how they can fail to take criticism seriously. A myth of the “auteur” in game design then protects the superstar leaders, which is why the video is called “People Make Games” (people, not single auteurs). Watch it.

Jeanna Matthews 

Jeanna Matthews from Clarkson University gave a great talk at our AI4Society Ethical Data and AI Salon on “Creating Incentives for Accountability and Iterative Improvement in Automated Decision-Making Systems.” She talked about a case she was involved in concerning DNA matching software used in criminal cases, where her team was able to actually get the code and show that, under certain circumstances, the software would generate false positives (people having their DNA matched to DNA from a crime scene when it shouldn’t have been).

As the title of her talk suggests, she used the concrete example to make the point that we need to create incentives for companies to test and improve their AIs. In particular she suggested that:

  1. Companies should be encouraged or regulated to invest some of the profit they make from the efficiencies of AI back into improving the AI.
  2. A better way to deal with the problems of AIs than weaving humans into the loop would be to set up independent human testers who test the AI and have a mechanism of redress. She pointed out how humans in the loop can get lazy, can be incentivized to agree with the AI, and so on.
  3. We need regulation! No other approach will motivate companies to improve their AIs.

We had an interesting conversation around the question of how one could test point 2. Can we come up with a way of testing which approach is better?
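Her second point, independent testing, can be illustrated with a toy sketch. Everything here is invented for illustration (the `flawed_dna_matcher` and its data are hypothetical, not the actual software from the case): an outside tester treats the system as a black box and measures its false-positive rate against ground-truth non-matching cases.

```python
def false_positive_rate(classifier, negatives):
    """Black-box audit: run the classifier on ground-truth non-matches
    and report the fraction it wrongly declares to be matches."""
    false_positives = sum(1 for case in negatives if classifier(case))
    return false_positives / len(negatives)

# Hypothetical flawed matcher: it wrongly declares a match whenever the
# sample quality drops below a threshold, regardless of the profile.
def flawed_dna_matcher(case):
    profile, quality = case
    return profile == "suspect" or quality < 0.2

# 100 ground-truth non-matching cases at varying sample qualities.
negatives = [("someone_else", q / 100) for q in range(100)]

rate = false_positive_rate(flawed_dna_matcher, negatives)
print(f"estimated false-positive rate: {rate:.2f}")  # → 0.20
```

The point of the sketch is that the tester never needs to see the source code; a labelled test set and access to the system’s decisions are enough to surface the failure mode.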

She shared a link to a collection of links to most of the relevant papers and information: Northwestern Panel, March 10 2022.

Michael Groden Obituary

I just found out that Michael Groden (1947–2021) passed away a year ago. Groden was a member of CSDH/SCHN back when it was called COCH/COSH and gave papers at our conferences. He developed a hypertext version of Ulysses that was never published because of rights issues. He did, however, talk about it, and he published his ideas about hypertext editions of complex works like Ulysses. See his online CV for more.

Replication, Repetition, or Revivification

A short essay I wrote with Stéfan Sinclair on “Recapitulation, Replication, Reanalysis, Repetition, or Revivification” is now up in preprint form. The essay is part of a longer work on “Anatomy of tools: A closer look at ‘textual DH’ methodologies.” The longer work is a set of interventions looking at text tools. These came out of an ADHO SIG-DLS (Digital Literary Studies) workshop that took place in Utrecht in July 2019.

Our intervention at the workshop had the original title “Zombies as Tools: Revivification in Computer Assisted Interpretation” and concentrated on practices of exploring old tools – a sort of revivification or bringing back to life of zombie tools.

The full paper should be published soon by DHQ.

The Universal Paperclips Game

Just finished playing the Universal Paperclips game, which was surprisingly fun. It took me about 3.5 hours to get to sentience. The idea of the game is that you are an AI running a paperclip company, and you make decisions and investments. The game was inspired by the philosopher Nick Bostrom’s paperclip maximizer thought experiment, which shows the risk that some harmless AI that controls the making of paperclips might evolve into an AGI (Artificial General Intelligence) and pose a risk to us. It might even convert all the resources of the universe into paperclips. The original thought experiment appears in Bostrom’s paper Ethical Issues in Advanced Artificial Intelligence to illustrate the point that “Artificial intellects need not have humanlike motives.”

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to “liberate” itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

The game is rather addictive despite having a simple interface where all you can do is click on buttons making decisions. The decisions you get to make change over time and there are different panels that open up for exploration.

I learned about the game from an interesting blog entry by David Rosenthal, It Isn’t About The Technology, which is a response to enthusiasm about Web 3.0 and decentralized technologies (blockchain) and how they might save us. Rosenthal responds that it isn’t about the technology.

One of the more interesting ideas that Rosenthal mentions is from Charles Stross’s keynote for the 34th Chaos Communications Congress to the effect that businesses are “slow AIs”. Corporations are machines that, like the paperclip maximizer, are self-optimizing and evolve until they are dangerous – something we are seeing with Google and Facebook.

Wordle – A daily word game


Guess the hidden word in 6 tries. A new puzzle is available each day.

Well … I finally played Wordle – A daily word game after reading about it. It is a nice, clean puzzle that got me thinking about vowels. I like the idea that there is only one a day, as I was immediately tempted to try another and another … Instead, the one-a-day limit gives it a detachment. I can see why the New York Times would buy it; it is the sort of game that would bring in potential subscribers.
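The scoring mechanic itself is easy to sketch. Assuming the standard Wordle rules as I understand them (this is my reconstruction, not the game’s actual code), scoring a guess takes two passes so that repeated letters are handled correctly:

```python
from collections import Counter

def wordle_feedback(guess, answer):
    """Score a five-letter guess: 'g' = right letter, right spot;
    'y' = letter in the word but elsewhere; '.' = not in the word.
    Each answer letter can only be 'used' once, so repeated letters
    in the guess don't all light up."""
    feedback = ["."] * len(guess)
    remaining = Counter()
    # First pass: mark greens and count the unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "g"
        else:
            remaining[a] += 1
    # Second pass: mark yellows, consuming remaining letters.
    for i, g in enumerate(guess):
        if feedback[i] == "." and remaining[g] > 0:
            feedback[i] = "y"
            remaining[g] -= 1
    return "".join(feedback)

print(wordle_feedback("crane", "nymph"))  # → "...y."  (only 'n' is in the word)
```

The two-pass structure matters: greens must be claimed first, otherwise a duplicate letter early in the guess could steal the yellow from a letter that is actually in the right position later on.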

Ottawa’s use of our location data raises big surveillance and privacy concerns

In order to track the pandemic, the Public Health Agency of Canada has been using location data without explicit and informed consent. Transparency is key to building and maintaining trust.

The Conversation has just published an article on Ottawa’s use of our location data raises big surveillance and privacy concerns. This was written with a number of colleagues who were part of a Dagstuhl research retreat on Mobility Data Analysis: from Technical to Ethical.

We are at a moment when ethical principles are really not enough and we need to start talking about best practices in order to develop a culture of ethical use of data.

We Might Be in a Simulation. How Much Should That Worry Us?

We may not be able to prove that we are in a simulation, but at the very least, it will be a possibility that we can’t rule out. But it could be more than that. Chalmers argues that if we’re in a simulation, there’d be no reason to think it’s the only simulation; in the same way that lots of different computers today are running Microsoft Excel, lots of different machines might be running an instance of the simulation. If that was the case, simulated worlds would vastly outnumber non-sim worlds — meaning that, just as a matter of statistics, it would be not just possible that our world is one of the many simulations but likely.

The New York Times has a fun opinion piece to the effect that We Might Be in a Simulation. How Much Should That Worry Us? This follows on Nick Bostrom’s essay Are you living in a computer simulation?, which argues that one of the following must be true: posthuman civilizations never reach the stage of running ancestor simulations, they choose not to run them, or we are almost certainly living in one.

The opinion is partly a review of a recent book by David Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (which I haven’t read.) Chalmers thinks there is a good chance we are in a simulation, and if so, there are probably others.
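The statistical step in the quoted argument is just counting. As a back-of-the-envelope sketch (the numbers are invented for illustration), if there is one base world and n subjectively indistinguishable simulations, the chance of being in the base world shrinks quickly:

```python
# One base world plus n simulated worlds, all indistinguishable from the
# inside: by simple counting, P(you are in the base world) = 1 / (n + 1).
for n in (1, 10, 1000):
    p_base = 1 / (n + 1)
    print(f"{n:>4} simulations -> P(base reality) = {p_base:.4f}")
```

This is the sense in which Chalmers’s point about many machines each running an instance makes being simulated “not just possible but likely”: the conclusion follows from the count alone, not from any evidence about our particular world.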

I am also reminded of Hervé Le Tellier’s novel The Anomaly, in which a plane full of people pops out of the clouds for the second time, creating an anomaly in which there are two instances of each person on the plane. This is taken as a glitch that may indicate that we are in a simulation, raising all sorts of questions about whether there actually are anomalies that might indicate that this really is a simulation, or a complicated idea in God’s mind (think of Bishop Berkeley’s idealism).

For me the challenge is the complexity of the world I experience. I can’t help thinking that a posthuman society modelling things really doesn’t need as rich a world as the one I experience. For that matter, would there really be enough computing power to do it? Is this simulation fantasy just a virtual-reality version of the singularity hypothesis, prompted by the new VR technologies coming on stream?