Lisa: Steve Jobs’ sabotage and Apple’s secret burial

Who remembers the Lisa? The Verge has a nice short documentary on the Lisa: Steve Jobs’ sabotage and Apple’s secret burial. The Lisa, named after Jobs’ daughter and released in 1983, was the first Apple computer with a graphical user interface. Alas, it was too expensive (almost $10K USD at the time) and, despite being technically superior, was eventually superseded by the Macintosh, which came out in 1984.

The documentary is less about the Lisa than about the end of the Lisa, including an interview with Bob Cook, who, thanks to a deal with Apple, sold remaindered and used Lisas after they were discontinued, until Apple decided to bury them all in a landfill in Utah. (Which reminds me of the Atari video game cartridge burial of 1983.) The documentary is also, as every Apple story is, about Steve Jobs and his return to Apple in the late 1990s, which led to its turnaround into the successful company it is now. Was it Jobs who wanted to bury the Lisa?

Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The Center for AI Safety has issued a very short Statement on AI Risk (see the sentence above). It has been signed by the likes of Yoshua Bengio and Geoffrey Hinton. I’m not sure if it is an alternative to the much longer Open Letter, but it focuses on the warning without any prescription as to what we should do. The Open Letter was criticized by many in the AI community, so perhaps CAIS was trying to find wording that could bring together “AI Scientists” and “Other Notable Figures.”

I personally find this alarmist. I find myself less and less impressed with ChatGPT as it continues to fabricate answers of little use (because they are false). I tend to agree with Elizabeth Renieris, who is quoted in the BBC story “Artificial intelligence could lead to extinction, experts warn,” to the effect that there are far more pressing immediate issues with AI to worry about. She says,

“Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable,” she said. They would “drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide”.

All the concern about extinction has me wondering if this isn’t a way of hyping AI to make everyone and every AI business more important. If there is an existential risk then it must be a priority, and if it is a priority then we should be investing in it because, of course, the Chinese are. (Note that the Chinese have actually presented draft regulations that they will probably enforce.) In other words, the drama of extinction could serve the big AI companies like OpenAI, Microsoft, Google, and Meta in various ways:

  • The drama could convince people that there is real disruptive potential in AI so they should invest now! Get in before it is too late.
  • The drama could lead to regulation which would actually help the big AI companies as they have the capacity to manage regulation in ways that small startups don’t. The big will get bigger with regulation.

I should stress that this is speculation. I probably shouldn’t be so cynical. Instead, let’s look to what we can do locally.

The Institution of Knowledge

Last week the Kule Institute for Advanced Study, the colab and the Dunlop Art Gallery organized an exhibit/symposium on The Institution of Knowledge. The exhibit featured artists reflecting on knowledge and institutions, and the symposium included performance lectures, panels, and talks.

I gave a talk on “The Knowledge We Bear” that looked at four of the main structures that discipline the ways we bear knowledge in the university as an institution. I also moderated a dialogue between Kevin Kee and Jacques Beauvais.

The three days were extraordinary thanks to the leadership of my co-organizer Natalie Loveless. I learned a lot about weaving research and creation together.

In many ways this was my last major initiative as Director of KIAS. On July 1st Michael O’Driscoll will take over. It was a way of reflecting on institutes and what they can do with others. I’m grateful to all those who participated.

Remembering Dino Buzzetti, co-founder and honorary president of the AIUCD

The AIUCD (Association for Humanistic Informatics and Digital Culture) has posted a nice blog entry with memories of Dino Buzzetti (in Italian). See Ricordando Dino Buzzetti, co-fondatore e presidente onorario dell’AIUCD – Informatica Umanistica e Cultura Digitale: il blog dell’AIUCD.

Dino was the co-founder and honorary president of the AIUCD. He was one of the few other philosophers in the digital humanities. I last saw him in Tuscany and wish I had taken more time to talk with him about his work. His paper “Towards an operational approach to computational text analysis” is in the recent collection I helped edit On Making in the Digital Humanities.

Institutions and Knowledge

University of Alberta is home to 18 faculties and dozens of research centres and institutes.

Institutions like the University of Alberta are typically divided into colleges, faculties, and then departments. The U of Alberta has recently reorganized around three major Colleges that correspond to the three major granting councils in Canada. See Colleges + Faculties | University of Alberta. We then have centres and institutes that attempt to bridge the gaps created between units. The Kule Institute for Advanced Study, for example, supports interdisciplinary and intersectoral research in an attempt to span the gaps between departments.

What are the institutional structures that guide and constrain knowledge creation and sharing at a university? Here is a rough list:

  • The annual faculty performance assessment process has a major impact on the knowledge created by faculty. University processes and standards for assessment influence what we do and don’t do. Typically research is what is valued, and that sets the tone. The tenure-track process does eventually free one to do research that isn’t understood, but one still gets regular feedback that can influence the directions one takes.
  • The particular division of a university into departments structures what knowledge one is expected to create and teach. The divisions are a topology of what are considered the important fields of knowledge, even if there are centres and institutes that cross boundaries. These divisions into departments and faculties have a history; they are not fixed, but neither are they fluid. They come and go. A university is too large to manage without divisions, but divisions can lead to silos that don’t communicate much with each other.
  • What one can teach and is assigned to teach has a dramatic effect on the knowledge one shares and thinks about. Even if one supposedly knows what one teaches, teaching, especially at the graduate level, encourages sustained reflection on certain issues. Teaching is also one of the most important ways knowledge is replicated and shared.
  • Knowledge infrastructure like the library and available labs makes possible or constrains what one can do. If one doesn’t have access to publications in a field, that limits one’s ability to study it. This is why libraries are so important to research in some fields. Likewise, if you don’t have access to the right sort of lab and research equipment, you can’t do the research. The ongoing competition for infrastructure resources, from space to books, is part of the shifting politics of knowledge.
  • Universities will also have different incentives and supports for research, from small grants to grant-writing staff. Research services offices have programs, staff, and so on that can support new knowledge creation, or not.

Then there are structures that are outside the university like the granting councils, but that is for another blog post.


Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous. – Auto-GPT/README.md at master · Torantulino/Auto-GPT

From a video on 3 Quarks Daily on whether ChatGPT can prompt itself, I discovered Auto-GPT. Auto-GPT is powered by GPT-4. You describe a mission and it will try to launch tasks, assess them, and complete the mission. Needless to say, it was inevitable that someone would find a way to use ChatGPT or one of its relatives to try to complete complicated jobs, including taking over the world, as Chaos-GPT claims to want to do (using Auto-GPT).
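Conceptually, Auto-GPT wraps the model in a loop: ask it to plan the next task toward the mission, carry the task out, feed the result back, and repeat until the model judges the mission done. Here is a minimal sketch of that loop in Python, assuming a hypothetical call_llm() helper in place of a real LLM API; it illustrates the idea rather than Auto-GPT’s actual code.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real agent would query an LLM API here."""
    return "DONE"  # stub answer so the sketch runs end to end


def run_mission(mission: str, max_steps: int = 10) -> list[str]:
    """A plan-act-assess loop in the spirit of Auto-GPT."""
    history: list[str] = []
    for _ in range(max_steps):
        # Ask the model to plan the next task, given the mission and
        # everything completed so far (the assessment step).
        task = call_llm(
            f"Mission: {mission}\nCompleted so far: {history}\n"
            "Propose the single next task, or reply DONE if finished."
        )
        if task.strip() == "DONE":
            break
        # "Execute" the task by asking the model to carry it out, and
        # record the result so the next planning step can build on it.
        result = call_llm(f"Carry out this task and report the result: {task}")
        history.append(f"{task} -> {result}")
    return history


if __name__ == "__main__":
    print(run_mission("Summarize the README of a repository"))
```

The real tool goes further, giving the model memory and commands it can invoke (web searches, file storage, and so on), which is what makes it both interesting and worrying.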

How long will it be before someone figures out how to use these tools to do something truly nasty? I give it about six months before we get stories of generative AI being used to systematically harass people, to find information on how to harm people, or to find ways to waste resources like the paperclip maximizer. Is it surprising that governments like Italy’s have banned ChatGPT?


U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats

U of A computing scientists work with Japanese researchers to refine a virtual and mixed reality video game that can improve motor skills for older adults and sedentary people.

The Folio of the University of Alberta published a story about a trip to Japan that I and others took: U of A computing scientists work with Japanese researchers on virtual reality game to get people out of their seats. Ritsumeikan invited us to develop research collaborations around gaming, language, and artificial intelligence. Our visit was a chance to further those collaborations, like the one my colleagues Eleni Stroulia and Victor Fernandez Cervantes are developing with Ruck Thawonmas around games for older adults. This inter-university set of collaborations builds on projects I was involved in going back to 2011, including a conference (Replaying Japan) and a journal, the Journal of Replaying Japan.

The highlight was the signing of a Memorandum of Understanding by the two presidents (of the U of A and Ritsumeikan). I was also involved, as was Professor Nakamura. May the collaboration thrive.

2023 Annual Public Lecture in Philosophy

Last week I gave the 2023 Annual Public Lecture in Philosophy. You can watch a recording of it here. The talk was on The Eliza Effect: Data Ethics for Machine Learning.

I started the talk with the case of Kevin Roose’s interaction with Sydney (Microsoft’s name for Bing Chat) where it ended up telling Roose that it loved him. From there I discussed some of the reasons we should be concerned with the latest generation of chatbots. I then looked at the ethics of LAION-5B as an example of how we can audit the ethics of projects. I ended with some reflections on what an ethics of AI could be.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

The Future of Life Institute has issued a letter, signed by over 1,000 people (including myself), calling on AI labs to pause: Pause Giant AI Experiments: An Open Letter – Future of Life Institute. The letter asks for a pause so that safety protocols can be developed,

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This letter to AI labs follows a number of essays and opinion pieces suggesting that maybe we are going too fast and should show restraint, all in the face of the explosive interest in large language models after ChatGPT.

  • Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk,” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
  • Yuval Noah Harari has an opinion piece in the New York Times with the title “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” where he talks about the dangers of AIs manipulating culture.

We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.

The Story of Class Struggle, America’s Most Popular Marxist Board Game

Released in 1978, a socialist alternative to Monopoly sold over 200,000 copies and was translated into multiple languages.

Mental Floss has a nice piece of floss with The Story of Class Struggle, America’s Most Popular Marxist Board Game. Class Struggle, the board game, was developed by a political science professor, Bertell Ollman, who wanted an alternative to Monopoly, which ironically was based on The Landlord’s Game, originally designed to show the evils of property. Class Struggle was sold to Avalon Hill, which eventually discontinued it. It looks like you can find copies on eBay.