Forty years ago Apple debuted a computer that changed our world, for good or ill | Siva Vaidhyanathan | The Guardian

In many ways, the long 21st century began when Apple launched the Macintosh with its ‘1984’ Super Bowl ad

The Guardian has a story about the 40th anniversary of the Apple Macintosh, Forty years ago Apple debuted a computer that changed our world, for good or ill. The famous "1984" Super Bowl ad, directed by Ridley Scott, aired on January 22nd, 1984 and announced that the Macintosh would be introduced on January 24th.

What made the Mac so revolutionary? To be honest, the Mac wasn’t really that innovative. Apple had tried to sell a GUI (Graphical User Interface) computer before with the Lisa, but it was too expensive. The Lisa in turn had been developed using ideas from the Xerox Palo Alto Research Center (PARC) that were marketed in the Xerox Star of 1981, which was again too expensive to be influential. What the Mac got right was the price, making a GUI computer affordable. And the rest was history.

The author of the Guardian article, Siva Vaidhyanathan, argues that the Mac and later the iPhone hid the realities of their manufacture and innards. This was a common critique of the GUI: that it hid the way the operating system “really” worked, which the command line of MS-DOS presumably showed.

This move to magic through design has blinded us to the real conditions of most people working and living in the world. A gated device is similar to a gated community. Beyond that, the sealed boxes, once they included ubiquitous cameras and location devices and were connected through invisible radio signals, operate as a global surveillance system that Soviet dictators could never have dreamed of. We bought into a world of soft control beyond Orwell’s imagination as well.

Frankly, I think the argument is exaggerated. Consumer products like cars had been hiding their workings under the hood long before the Macintosh. For that matter, the IBM PCs running MS-DOS at the time were not really more open. The command line is as much an interface as a graphical one; it is just a different paradigm, a dialogue interface where you order the machine around instead of a desktop where you manipulate files. The argument seems to be one of association – associating the Mac with a broad generalization about capitalism and then hinting that everything after can be blamed on us wanting what Apple offered. What I remember was struggling to learn the commands of an IBM and then being offered a better designed computer. Sometimes better design isn’t a surveillance plot.

Huminfra: The Imitation Game: Artificial Intelligence and Dialogue

Today I gave an online talk for an event organized by Huminfra, a Swedish national infrastructure project. The talk, titled “The Imitation Game: Artificial Intelligence and Dialogue,” was part of an online event on “Research in the Humanities in the wake of ChatGPT.” I drew on Turing’s name for the Turing Test, the “imitation game.” Here is the abstract,

The release of ChatGPT has provoked an explosion of interest in the conversational opportunities of generative artificial intelligence (AI). In this presentation Dr. Rockwell will look at how dialogue has been presented as a paradigm for thinking machines starting with Alan Turing’s proposal to test machine intelligence with an “imitation game” now known as the Turing Test. In this context Rockwell will show Veliza, a tool developed as part of Voyant Tools (voyant-tools.org) that lets you play with and script a simple chatbot based on ELIZA, which was developed by Joseph Weizenbaum in 1966. ELIZA was one of the first chatbots with which you could have a conversation. It responded as if it were a psychotherapist, turning whatever you said back into a question. While it was simple, it could be quite entertaining and thus provides a useful way to understand chatbots.
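
To give a sense of how little machinery sits behind this kind of conversation, here is a minimal sketch in Python of the ELIZA idea – a few pattern rules plus pronoun “reflection.” It is my own illustration, not Weizenbaum’s original DOCTOR script nor the Veliza implementation, and the rules are invented for the example.

```python
import re
import random

# Pronoun reflections used to turn the user's statement back on them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few (pattern, responses) pairs in the spirit of the DOCTOR script.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply addresses the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return an ELIZA-style response from the first rule that matches."""
    for pattern, responses in RULES:
        match = re.match(pattern, sentence.rstrip(".!").lower())
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I need a holiday"))    # e.g. "Why do you need a holiday?"
    print(respond("I am feeling tired"))  # e.g. "How long have you been feeling tired?"
```

Everything the “therapist” says is built from the user’s own words, which is why the illusion of conversation is so cheap to produce.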

The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

Today and tomorrow representatives from a number of countries have gathered at Bletchley Park to discuss AI safety. Close to 30 countries, including Canada, were represented, and they issued The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. The declaration starts with,

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

The declaration discusses opportunities and the need to support innovation, but also mentions that “AI also poses significant risks” and mentions the usual suspects, especially “capable, general-purpose models” that could be repurposed for misuse.

What stands out is the commitment to international collaboration among the major players, including China. This is a good sign.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.

Bletchley Park is becoming a UK symbol of computing. It was, of course, where the Allied code-breaking centre was set up. It is where Turing worked on breaking the Enigma ciphers and where Colossus, an important early computer, was built to decode German messages, giving the Allies a crucial advantage. It is appropriate that UK Prime Minister Sunak has used this site to gather representatives. Unfortunately few leaders joined him there, sending representatives instead, though Trudeau may show up on the 2nd.

Alas, the Declaration is short on specifics, though individual countries like the United States and Canada are securing voluntary commitments from players to abide by codes of conduct. China and the EU are also passing laws regulating artificial intelligence.

One thing not mentioned at all is the danger of military uses of AI. It is as if warbots are off the table in AI safety discussions.

The good news is that there will be follow-up meetings at which we can hope that concrete agreements might be worked out.

Lit sounds: U of A experts help rescue treasure trove of audio cultural history

A U of A professor is helping to rescue tens of thousands of lost audio and video recordings — on tape, film, vinyl or any other bygone media — across North America.

The Folio has a nice story about the SpokenWeb project that I am part of, Lit sounds: U of A experts help rescue treasure trove of audio cultural history. The article discusses the collaboration and importance of archiving to scholarship.

History of Information Timeline

An interactive, illustrated timeline of historic moments in humankind’s quest for information. With annotations by Jeremy Norman.

History of Information is a searchable database of events in the history of information. The link will show you the digital humanities category and what the creator thought were the important events. I must say that it looks rather biased towards the interventions of white men.

Group hopes to resurrect 128-year-old Cyclorama of Jerusalem, near Quebec City

MONTREAL — The last cyclorama in Canada has been hidden from public view since it closed in 2018, but a small group of people are hoping to revive the unique…

Good News! A Group hopes to resurrect 128-year-old Cyclorama of Jerusalem, near Quebec City. The Cyclorama of Jerusalem is the last cyclorama still standing in Canada. I visited and blogged about it back in 2004. Then it closed, and now they are trying to restore it and sell it.

Cycloramas are the virtual reality of the 19th century. Long paintings, sometimes with props, were mounted in the round in special buildings that allowed people to feel immersed in a painted space. They remind us of the variety of media that have been surpassed – the forgotten types of media.

The Emergence of Presentation Software and the Prehistory of PowerPoint

PowerPoint presentations have taken over the world despite Edward Tufte’s pamphlet The Cognitive Style of PowerPoint. It seems that in some contexts the “deck” has become the medium of information exchange rather than the report, paper or memo. On Slashdot I came across a link to an MIT Technology Review essay titled Next slide, please: A brief history of the corporate presentation. Another history is available from the Computer History Museum, Slide Logic: The Emergence of Presentation Software and the Prehistory of PowerPoint.

I remember the beginnings of computer-assisted presentations. My unit at the University of Toronto Computing Services experimented with the first tools and projectors. The three-gun projectors were finicky to set up and I felt a little guilty promoting set ups which I knew would take lots of technical support. In one presentation on digital presentations there was actually a colleague under the table making sure all the technology worked while I pitched it to faculty.

I also remember tools before PowerPoint. MORE was an outliner and thinking tool that had a presentation mode, much the way Mathematica does. MORE was developed by Dave Winer, who has a nice page on the history of the outline processors he worked on here. He leaves out, however, how Douglas Engelbart’s Mother of All Demos in 1968 showed something like outlining too.

Alas, PowerPoint came to dominate, though now we have a bunch of innovative presentation tools that work on the web, from Google Slides to Prezi.

Now back to Tufte. His critique still stands. Presentation tools have a cognitive style that encourages us to break complex ideas into chunks and then show one chunk at a time in a linear sequence. He points out that a well-designed handout or pamphlet (like his pamphlet on The Cognitive Style of PowerPoint) can present a lot more information in a way that doesn’t hide the connections. You can have something more like a concept map that you take people through on a tour. Prezi deserves credit for paying attention to Tufte and breaking out of the linear style.

Now, of course, there are AI tools that can generate presentations, like Presentations.ai or Slideoo. You can see a list of a number of them here. No need to know what you’re presenting: an AI will generate the content, design the slides, and soon present it too.

40 years of the Nintendo Famicom – the console that changed the games industry

Entering a crowded field, the Nintendo Famicom came to dominate the market in the 1980s, leaving a family orientated legacy that continues to be felt today

The Guardian has a good story on the 40th anniversary of the Nintendo Famicom, 40 years of the Nintendo Famicom – the console that changed the games industry. The story quotes James Newman and also mentions Masayuki Uemura, whom Newman and I knew through the Replaying Japan conferences. Alas, Uemura, who was at Ritsumeikan after he retired from Nintendo, passed away in 2021.

The story points out how Nintendo deliberately promoted the Famicom as a family machine that could be hooked up to the family TV (hence “Fami-com”). In various ways they wanted to legitimize gaming as a family experience. By contrast, when Nintendo brought the machine to North America it was remodelled to look like a VCR and called the Nintendo Entertainment System.

How Canada Accidentally Helped Crack Computer Translation

A technological whodunit—featuring Parliament, computer scientists, and a tipsy plane flight

Arun sent me a link to a neat story about How Canada Accidentally Helped Crack Computer Translation. The story, by Christine Mitchell, is in The Walrus (June 2023). It describes how IBM got hold of a magnetic tape reel with 14 years of the Hansard – the translated transcripts of the Canadian Parliament. IBM went on to use this data trove to make advances in automatic translation.

The story also mentions the politics of automated translation research in Canada. I have previously blogged about the Booths, who were recruited by the NRC to Saskatchewan to work on automated translation. They were apparently pursuing a statistical approach like the one IBM took later on, but their funding was cut.
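
To give a sense of how a parallel corpus like the Hansard can drive statistical translation, here is a toy sketch (my own illustration, not IBM’s system) that estimates word-translation probabilities from a few invented aligned sentence pairs using expectation-maximization, roughly in the spirit of the later IBM models.

```python
from collections import defaultdict

# A tiny stand-in for a parallel corpus such as the bilingual Hansard:
# each pair is (English sentence, French sentence), already tokenized.
# These sentences are invented for the example.
corpus = [
    ("the house".split(), "la chambre".split()),
    ("the session".split(), "la séance".split()),
    ("a house".split(), "une chambre".split()),
]

# IBM Model 1-style estimation: start with uniform translation
# probabilities t(f|e) and refine them with expectation-maximization.
french_vocab = {f for _, fr in corpus for f in fr}
t = defaultdict(lambda: 1.0 / len(french_vocab))  # t[(f, e)]

for _ in range(10):  # a few EM iterations are enough for a toy corpus
    count = defaultdict(float)   # expected counts of (f, e) pairs
    total = defaultdict(float)   # expected counts of e
    for en, fr in corpus:
        for f in fr:
            norm = sum(t[(f, e)] for e in en)
            for e in en:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# After training, "chambre" aligns most strongly with "house".
print(max(french_vocab, key=lambda f: t[(f, "house")]))  # expected: chambre
```

The point is simply that nothing here is told which word translates which; the alignments fall out of co-occurrence across enough sentence pairs, which is why a 14-year trove of translated debates was such a prize.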

Speaking of automatic translation, Canada had a computerized system, METEO for translating daily weather forecasts from Environment Canada. This ran from 1981 to 2001 and was an early successful implementation of automatic translation in the real world. It came out of work at the TAUM (Traduction Automatique à l’Université de Montréal) research group at the Université de Montréal that was set up in the late 1960s.

The case for taking AI seriously as a threat to humanity

From the Open Philanthropy site I came across this older (2020) Vox article, The case for taking AI seriously as a threat to humanity, by Kelsey Piper. The article nicely summarizes some of the history of concerns around AGI (Artificial General Intelligence), as people tend to call an AI so advanced it might be comparable to human intelligence. This history goes back to Turing’s colleague I.J. Good, who speculated in 1965 that,

An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Such an explosion has been called the Singularity by Vernor Vinge and was popularized by Ray Kurzweil.

I came across this while following threads on the whole issue of whether AI could soon become an existential threat. The question of the dangers of AI (whether AGI or just narrow AI) has gotten a lot of attention, especially since Geoffrey Hinton ended his relationship with Google so that he could speak openly about it. He and others signed a short statement published on the site of the Center for AI Safety,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The existential question only becomes relevant if one believes, as many do, that there is considerable risk that AI research and development is moving so fast that it may soon achieve some level of generality, at which point such an AGI could begin to act in unpredictable and dangerous ways. Alternatively, people could misuse such powerful AGIs to harm us. Open Philanthropy is one group that is focused on Potential Risks from Advanced AI. They could be classed as an organization with a longtermist view, the view that it is important for ethics (and philanthropy) to consider long-term issues.

Advances in AI could lead to extremely positive developments, but could also potentially pose risks from intentional misuse or catastrophic accidents.

Others have called for a Manhattan Project for AI Safety. There are, of course, those (including me) who feel that this is distracting from the immediate unintended effects of AI and/or that there is little existential danger for the moment as AGI is decades off. The cynic in me also wonders how much the distraction is intentional, as it both hypes the technology (it’s dangerous, therefore it must be important) and justifies ignoring stubborn immediate problems like racist bias in the training data.

Kelsey Piper has in the meantime published A Field Guide to AI Safety.

The question still remains whether AI is dangerous enough to merit the sort of ethical attention that nuclear power, for example, has received.