Ken Kesey and the Rush to Deinstitutionalization

Jack Nicholson and director Miloš Forman on the set of One Flew Over the Cuckoo’s Nest

Whatever the literary strengths of One Flew Over the Cuckoo’s Nest, the book has done much to harm both the mentally ill and their communities.

This May the Kule Institute is organizing a hybrid exhibit/symposium on the Institution of Knowledge. We are bringing together a group of artists and thinkers to raise and address questions about institutional structures and knowledge. One question that the small group I’m part of discussed this week was deinstitutionalization and the view, best captured by Ken Kesey in One Flew Over the Cuckoo’s Nest, that asylums as institutions were sites that did more harm than good. Stephen Eide has a nice article about this, Ken Kesey and the Rush to Deinstitutionalization (Quillette, Nov. 14, 2022).

There are a number of aspects to the issue. The first thing to note is that the deinstitutionalization of people with serious mental health issues didn’t work as imagined. It was not the freeing of an oppressed constituency back to the community, where the new drugs could help them integrate and get on with their lives. There wasn’t really a community that wanted them other than the street, and many ended up in the very institutions asylums were meant to replace – prisons. Stephen Eide’s book Homelessness in America traces the effects of deinstitutionalization, changes in vagrancy laws, and the “cleaning up” of slums on homelessness, leading to the problem as we see it today.

But what about the idea of deinstitutionalization? Important to this idea would be Foucault, changes in psychiatry and how the discipline conceives of the role of medicine (and its institutions), and changes in public policy and what jurisdictions try to do with institutions.

One aspect of the issue that we forget if we think of institutions as bureaucracies is the built presence of institutions. From Jefferson’s design of the campus of the University of Virginia to Olmsted’s asylum landscapes, architects have shaped our imagination and the literal structures of certain types of institutions. This raises the question of what new types of institutions might be being designed now.

The Royal Game of Ur: Play the Oldest Board Game on Record – The New York Times

For 4,600 years, a mysterious game slept in the dust of southern Iraq, largely forgotten. The passion of a museum curator and the hunger of young Iraqis for their cultural history may bring it back.

The New York Times has a story on The Royal Game of Ur: Play the Oldest Board Game on Record. Irving Finkel, a curator at the British Museum, connected the translation of a tablet recording the rules with an ancient board game, copies of which survive in museums. More recently the game has been reintroduced into Iraq so that people can rediscover their ludic heritage.

The nice thing about the NYTimes article, besides the video of Finkel (who has an amazing beard), is that it includes a PDF that you can download and print to learn to play the game.

The article and Finkel’s video talk highlight how influential a game can be – how a set of rules can be a meme that helps rediscover a game.
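The rules, as usually reconstructed, are simple enough to simulate. Here is a minimal sketch in Python (my own illustration, not taken from the article or its PDF) of the dice mechanic: four tetrahedral dice, each with two of its four corners marked, with a throw worth the number of marked corners showing.

```python
import random
from collections import Counter

def throw_dice(num_dice: int = 4) -> int:
    """Throw the game's tetrahedral dice.

    Each die has two of its four corners marked, so each one shows a marked
    corner upward with probability 1/2; a throw is worth 0 to 4 squares.
    """
    return sum(random.randint(0, 1) for _ in range(num_dice))

# Estimate the distribution of throws: 2 is the most common, 0 and 4 the rarest.
counts = Counter(throw_dice() for _ in range(10_000))
for pips in sorted(counts):
    print(pips, round(counts[pips] / 10_000, 3))
```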

Elon Musk Reinstates Trump’s Twitter Account

Mr. Musk, who had asked Twitter users about whether to bring back the former president to the service, said, “The people have spoken.”

Just discovered that Elon Musk Reinstates Trump’s Twitter Account. I think it is time to go, then. I’m archiving my Twitter account and leaving.

For those who care, I’m now at @geoffreyrockwell@masto.ai, though I haven’t gotten the hang of Mastodon yet. I must take some time to read and watch.

Unitron Mac 512: A Contraband Mac 512K from Brazil

From a paper on postcolonial computing I learned about the Unitron Mac 512: A Contraband Mac 512K from Brazil. For a while Brazil didn’t allow the importation of computers (so as to kickstart its own computer industry). Unitron decided to reverse engineer the Mac 512K, but Apple put pressure on Brazil and the project was closed down. At least 500 machines were built, and I guess some are still in circulation.

The article is Philip, K., et al. (2010). “Postcolonial Computing: A Tactical Survey.” Science, Technology, & Human Values 37(1).

Though Apple had no intellectual property protection for the Macintosh in Brazil, the American corporation was able to pressure government and other economic actors within Brazil to reframe Unitron’s activities, once seen as nationalist and anti-colonial, as immoral piracy.

Character.AI: Dialogue on AI Ethics

Part of an image generated from the prompt “cartoon pencil drawing of ethics professor and student talking” by Midjourney, Oct. 5, 2022.

Last week I created a character on Character.AI, a new artificial intelligence tool created by some ex-Google engineers who worked on LaMDA, the language model from Google that I blogged about before.

Character.AI, which is now down for maintenance due to the number of users, lets you quickly create a character and then enter into dialogue with it. It actually works quite well. I created “The Ethics Professor” and then wrote a script of questions that I used to engage the AI character. The dialogue is below.
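The pattern itself, a persona plus a script of questions, is easy to reproduce with whatever dialogue model you have access to. Character.AI is used through its web interface, so the sketch below is hypothetical: generate_reply is only a stand-in for a real model call, and the persona text and questions are invented for illustration.

```python
# Hypothetical sketch of the persona-plus-scripted-questions pattern.
# Character.AI has no public API here, so generate_reply() is a stand-in
# to be replaced with a call to whatever dialogue model you actually use.

PERSONA = (
    "You are 'The Ethics Professor', a patient university professor who "
    "answers questions about AI ethics with examples and counter-questions."
)

SCRIPTED_QUESTIONS = [
    "What is the biggest ethical risk of large language models?",
    "Should an AI system ever make decisions about people?",
    "How should students learn to evaluate AI-generated text?",
]

def generate_reply(persona: str, history: list[tuple[str, str]], question: str) -> str:
    """Stand-in for a dialogue model call; replace with a real API or local model."""
    return f"[model reply to: {question!r}]"

def run_dialogue() -> None:
    history: list[tuple[str, str]] = []
    for question in SCRIPTED_QUESTIONS:
        reply = generate_reply(PERSONA, history, question)
        history.append((question, reply))
        print(f"Q: {question}\nA: {reply}\n")

if __name__ == "__main__":
    run_dialogue()
```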

From Bitcoin to Stablecoin: Crypto’s history is a house of cards

The wild beginnings, crazy turns, colorful characters and multiple comebacks of the crypto world

The Washington Post has a nice illustrated history of crypto, From Bitcoin to Stablecoin: Crypto’s history is a house of cards. They use a deck of cards as a visual metaphor, along with a graph of the ups and downs of crypto. I can’t help thinking that crypto is going to go up again, but when and in what form?

For that matter, where is Ruja Ignatova?

‘I saw the possibility of what could be done – so I did it’: revolutionary video game The Hobbit turns 40 | Games | The Guardian

The developer of the text-adventure game on how, at 20, she overcame 1980s misogyny to turn a Tolkien book into one of the most groundbreaking titles in the gaming canon

The Guardian has a story on Veronika Megler, who developed the innovative text (and image) adventure game The Hobbit (1982): ‘I saw the possibility of what could be done – so I did it’: revolutionary video game The Hobbit turns 40. She went on to get a PhD and is now a principal data scientist at Amazon Web Services!

You can now play The Hobbit on the Internet Archive.

Issues around AI text-to-art generators

A new art-generating AI system called Stable Diffusion can create convincing deepfakes, including of celebrities.

TechCrunch has a nice discussion in Deepfakes for all: Uncensored AI art model prompts ethics questions. The relatively sudden availability of AI text-to-art generators has provoked discussion of the ethics both of creation and of large machine learning models.

It is worth identifying some of the potential issues:

  • These art-generating AIs may have violated copyright in scraping millions of images. Could artists whose work has been exploited sue for compensation?
  • The AIs are black boxes that are hard to query. You can’t tell if copyrighted images were used.
  • These AIs could change the economics of illustration. People who used to commission and pay for custom art for things like magazines, book covers, and posters could start just using these AIs to save money. Just as Flickr changed the economics of photography, Midjourney could put commercial illustrators out of work.
  • We could see a lot more “original” art in situations where before people could not afford it. Perhaps poster stores could offer to generate a custom image for you and print it. Get your portrait done as a cyberpunk astronaut.
  • The AIs could reinforce bias in our visual literacy. Systems that always depict philosophers as old white guys with beards could limit our imagination of what could be.
  • They could be used to create pornographic deepfakes with people’s faces on them, or other toxic imagery.
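Part of what makes these questions pressing is how low the barrier to entry now is. As a rough sketch, assuming the Hugging Face diffusers library, a GPU, and the publicly released Stable Diffusion v1.5 weights (the model name, prompt, and parameters here are illustrative), generating an image takes only a few lines:

```python
import torch
from diffusers import StableDiffusionPipeline

# Downloading the weights may require accepting the model licence on Hugging Face.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "portrait of a philosopher as a cyberpunk astronaut, oil painting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("cyberpunk_philosopher.png")
```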

Social Sciences & Humanities Open Marketplace

Discover new resources for your research in Social Sciences and Humanities: tools, services, training materials and datasets, contextualised.

I’ve been experimenting with the Social Sciences & Humanities Open Marketplace. The Marketplace was developed by three European research infrastructures: DARIAH-EU, CLARIN, and CESSDA. I’m proud to say that TAPoR contributed data to the Marketplace. It is great to have such a directory service for finding things!
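The Marketplace can also be queried programmatically through its REST API. Here is a small sketch of a search query; note that the endpoint and parameter names are my assumptions and should be checked against the current API documentation.

```python
import requests

# Assumed endpoint and parameters; verify against the Marketplace API documentation.
BASE_URL = "https://marketplace-api.sshopencloud.eu/api/item-search"

def search_marketplace(query: str, limit: int = 5) -> list[str]:
    """Return the labels of the first few Marketplace items matching a query."""
    response = requests.get(BASE_URL, params={"q": query, "perpage": limit}, timeout=30)
    response.raise_for_status()
    return [item.get("label", "") for item in response.json().get("items", [])]

if __name__ == "__main__":
    for label in search_marketplace("text analysis"):
        print(label)
```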

EU Artificial Intelligence Act

With the introduction of the Artificial Intelligence Act, the European Union aims to create a legal framework for AI to promote trust and excellence. The AI Act would establish a risk-based framework to regulate AI applications, products and services. The rule of thumb: the higher the risk, the stricter the rule. But the proposal also raises important questions about fundamental rights and whether to simply prohibit certain AI applications, such as social scoring and mass surveillance, as UNESCO has recently urged in the Recommendation on AI Ethics, endorsed by 193 countries. Because of the significance of the proposed EU Act and the CAIDP’s goal to protect fundamental rights, democratic institutions and the rule of law, we have created this informational page to provide easy access to EU institutional documents, the relevant work of CAIDP and others, and to chart the important milestones as the proposal moves forward. We welcome your suggestions for additions. Please email us.

The Center for AI and Digital Policy (CAIDP) has a good page on the EU Artificial Intelligence Act with links to different resources. I’m trying to understand this Act and the network of documents related to it, as the AI Act could have a profound impact on how AI is regulated, so I’ve put together some starting points.

First, the point about the potential influence of the AI Act is made in a slide by Giuliano Borter, a CAIDP Fellow. The slide deck is a great starting point that covers key points to know.

Key Point #1 – EU Shapes Global Digital Policy

• Unlike OECD AI Principles, EU AI legislation will have legal force with consequences for businesses and consumers

• EU has enormous influence on global digital policy (e.g. GDPR)

• EU AI regulation could have similar impact

Borter goes on to point out that the Proposal is based on a “risk-based approach”: the higher the risk, the stricter the regulation. This approach is supposed to provide legal room for innovative businesses not working on risky projects while controlling problematic (riskier) uses. Borter’s slides suggest that an unresolved issue is mass surveillance. I can imagine the danger that data collected or inferred by smaller (or less risky) services gets aggregated into something with a different level of risk. There are also issues around biometrics (from face recognition on) and AI weapons that might not be covered.
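To make the tiering concrete, here is a toy sketch of how the risk tiers map to obligations; it paraphrases the commonly cited four-level summary of the Proposal and is illustrative, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Four risk tiers commonly used to summarize the proposed AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring by public authorities
    HIGH = "high"                  # e.g. AI used in hiring, credit, critical infrastructure
    LIMITED = "limited"            # e.g. chatbots, which must disclose that they are AI
    MINIMAL = "minimal"            # e.g. spam filters, game AI

# Paraphrased example obligations per tier (illustrative, not the legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "risk management, conformity assessment, human oversight, logging",
    RiskTier.LIMITED: "transparency obligations (users must know they are interacting with AI)",
    RiskTier.MINIMAL: "no new obligations; voluntary codes of conduct",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {OBLIGATIONS[tier]}")
```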

The Act is at the moment only a proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – the Proposal was launched in April of 2021, and all sorts of entities, including the CAIDP, are suggesting amendments.

What was the reason for this AI Act? In the Reasons and Objective opening to the Proposal they write that “The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.” (p. 1) You can see the balancing of values, trust and business.

But I think it is really the economic/business side of the issue that is driving the Act. This can be seen in the Explanatory Statement at the end of the Report on artificial intelligence in a digital age (PDF) from the European Parliament Special Committee on Artificial Intelligence in a Digital Age (AIDA).

Within the global competition, the EU has already fallen behind. Significant parts of AI innovation and even more the commercialisation of AI technologies take place outside of Europe. We neither take the lead in development, research or investment in AI. If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere. The consequences of falling further behind do not only threaten our economic prosperity but also lead to an application of AI that threatens our security, including surveillance, disinformation and social scoring. In fact, to be a global power means to be a leader in AI. (p. 61)

The AI Act may be seen as a way to catch up. AIDA makes the supporting case that “Instead of focusing on threats, a human-centric approach to AI based on our values will use AI for its benefits and give us the competitive edge to frame AI regulation on the global stage.” (p. 61) The idea seems to be that a values-based proposal that enables regulated, responsible AI will not only avoid risky uses but also create the legal space to encourage low-risk innovation. In particular, I sense a linkage to the Green Deal – i.e. that AI is seen as a promising technology that could help reduce energy use through smart systems.

Access Now also has a page on the AI Act. They have a nice, clear set of amendments that shows where some of the weaknesses in the AI Act could be.