Why are women philosophers often erased from collective memory?

The history of ideas still struggles to remember the names of notable women philosophers. Mary Hesse is a salient example.

Aeon has an important essay asking Why are women philosophers often erased from collective memory? It argues that a number of significant women philosophers, including Mary Hesse, have been lost (made absent) despite their influence. (You can see her Models and Analogies in Science through the Internet Archive.)

I read this after reading a chapter from Sara Ahmed’s Living a Feminist Life in which Ahmed talks about citation practices and how disciplines exclude diverse work in different ways. She does a great job of confronting the various excuses people have for their bleached white citations. Poking around, I found that others have written on this too, including Victor Ray, whose Inside Higher Ed essay The Racial Politics of Citation references Richard Delgado’s 1984 article The Imperial Scholar: Reflections on a Review of Civil Rights Literature.

What should be done about this? Obviously I’m not best placed to suggest remedies, but here are some of the ideas that show up:

  • We need to commit to taking the time to look at the works we read on a subject or for a project and to ask whose voice is missing. This shouldn’t be done at the end as a last-minute fix, but during the ideation phase.
  • We should gather and confront data on the citational patterns in our publications. Knowing what you have done is better than not knowing. (See the sketch after this list.)
  • We need to do the archaeological work of finding and recovering marginalized thinkers, and to reflect on why they were left out. Then we need to promote their work in teaching and research.
  • We should be willing to call out exclusionary citation patterns in the grants, articles, and proposals we review when it could make a difference.
  • We need to support work translating thinkers whose work is not available in English, to balance the distribution of influence.
  • We need to be willing to view our field and its questions very differently.
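
On the citational-data point above, here is a minimal sketch of what gathering that data could look like, assuming your references live in a BibTeX file (references.bib is a hypothetical filename). It does nothing more than tally the authors you cite:

```python
# A minimal sketch, not a rigorous bibliometric method: tally the authors
# cited in a BibTeX file to make your own citational pattern visible.
# "references.bib" is a hypothetical filename.
import re
from collections import Counter

with open("references.bib") as f:
    text = f.read()

counts = Counter()
# BibTeX author fields look like: author = {Ahmed, Sara and Hesse, Mary}
for match in re.finditer(r'author\s*=\s*[{"](.+?)["}]', text):
    for author in match.group(1).split(" and "):
        counts[author.strip()] += 1

for author, n in counts.most_common(20):
    print(f"{n:3d}  {author}")
```

A tally like this won’t tell you whose voice is missing, but it makes your pattern visible enough to confront.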

Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored (by AI)

[Images: black-and-white and AI-coloured versions of Philosophy by Klimt]

Google Arts & Culture launched a hub for all things Gustav Klimt today, which includes digital restorations of three lost paintings.

ARTnews, among other outlets, reports that Lost Gustav Klimt Paintings Destroyed in Fire Digitally Restored. The three faculty paintings (Medicine, Philosophy, and Jurisprudence) created for the University of Vienna were destroyed in a fire, leaving only black-and-white photographs. Now Google has helped recreate what the three paintings might have looked like using AI, as part of a Google Arts & Culture site on Klimt. You can read about the history of the three paintings here.

Whether in black and white or in colour, the painting of Philosophy (above) is stunning. The original in colour must have been overwhelming, especially as it was 170 by 118 inches. Philosophy is represented by the Sphinx-like figure merging with the universe. To one side is a stream of people, from the young to the old, who hold their heads in confusion. At the bottom is a woman, comparable to the woman in the painting of Medicine, who might be an inspired philosopher looking through us.

The ethics of regulating AI: When too much may be bad

By trying to put prior restraints on the release of algorithms, we will make the same mistake Milton’s censors were making in trying to restrict books before their publication. We will stifle the myriad possibilities inherent in an evolving new technology and the unintended effects that it will foster among new communities who can extend its reach into novel and previously unimaginable avenues. In many ways it will defeat our very goals for new technology, which is its ability to evolve, change and transform the world for the better.

3 Quarks Daily has another nice essay on ethics and AI by Ashutosh Jogalekar. This one is about The ethics of regulating AI: When too much may be bad. The argument is that we need to be careful about regulating algorithms preemptively. As the quote above makes clear, he makes three related points:

  • We need to be careful about censoring algorithms before they are tried.
  • One reason is that it is very difficult to predict the negative or positive outcomes of new technologies. Innovative technologies almost always have unanticipated effects, and censoring the technologies would limit our ability to learn about those effects and benefit from them.
  • Instead we should manage the effects as they emerge.

I can imagine some responses to this argument:

  • Unanticipated effects are exactly what we should be worried about. The reason for censoring preemptively is precisely to control for unanticipated effects. Why not encourage better anticipation of effects?
  • Unanticipated effects, especially network effects, often only manifest themselves when the technology is used at scale. By then it can be difficult to roll back the technology. Precisely when there is a problem is when we can’t easily change the way the technology is used.
  • One person’s unanticipated effect is another’s business or another’s freedom. There is rarely consensus about the value of effects.

I also note how Jogalekar talks about the technology as if it had agency. He talks about the technology’s ability to evolve. Strictly speaking, the technology doesn’t evolve; our uses of it do. When it comes to innovation we have to be careful not to ascribe agency to technology, as if it were some impersonal force we cannot resist.

InspiroBot

I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

InspiroBot is a web site with an AI bot that produces inspiring quotes and puts them on images, sometimes with hilarious results. You can generate new quotes over and over, and while generating them the system also interacts with you, saying things like “You’re my favorite user!” (I wonder if I’m the only one to get this or if InspiroBot flatters all its users.)

It also has a Mindfulness mode where it just keeps putting up pretty pictures and playing meditative music while reading out “inspirations.” Very funny, as in “Take in how your bodily orifices are part of heaven…”

While InspiroBot may seem like a toy, there is a serious side to this. First, it is powered by an AI that generates plausible inspirations (most of the time). Second, it offers a model of how we might use AI as a form of prompt: generating media that provokes us. Third, it shows the deep humour of current AI. Who can take it seriously?
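
I have no idea how InspiroBot actually works, but a toy sketch like the following (the phrase lists are my own invention) shows how little machinery it takes to produce plausible-sounding inspirations:

```python
# A toy illustration, not InspiroBot's actual method: randomly recombining
# stock phrases already yields plausible-sounding "inspirations".
import random

openers = ["Never forget that", "Remember:", "Before you dream,", "In the end,"]
subjects = ["your silence", "every failure", "the universe", "doubt"]
verbs = ["is", "becomes", "was always"]
closers = ["a door.", "the teacher you feared.", "part of heaven.", "enough."]

for _ in range(5):
    print(random.choice(openers), random.choice(subjects),
          random.choice(verbs), random.choice(closers))
```

The comedy, and perhaps the serious point, is how readily we read meaning into the recombinations.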

Thanks to Chelsea for this.

Ethics in the Age of Smart Systems

Today was the third day of a symposium I helped organize on Ethics in the Age of Smart Systems. For this we experimented with first organizing a monthly “dialogue”: an informal paper and discussion on a topic in AI ethics. These dialogues led into the symposium itself, which ran over three days, and each day we allowed for an ongoing conversation after the formal part of the event. We were also lucky that the keynotes were excellent.

  • Veena Dubal talked about Proposition 22 and how it has created a new employment category of those managed by algorithm (gig workers). She argued that this amounts to a new racial wage code, as most Uber/Lyft workers are people of colour or immigrants.
  • Virginia Dignum talked about how everyone is announcing their principles, but principles alone are not enough. She talked about how we also need standards; advisory panels and ethics officers; assessment lists (checklists); public awareness; and participation.
  • Rafael Capurro gave a philosophical paper about the “smart” in smart living. He talked about metis (Greek for cunning intelligence) and different forms of intelligence. He called for hesitation, in the sense of taking time to think about smart systems. His point was that there are time regimes of hype and determinism around AI, and that we need to resist them and take the time to think freely about technology.

Can GPT-3 Pass a Writer’s Turing Test?

While earlier computational approaches focused on narrow and inflexible grammar and syntax, these new Transformer models offer us novel insights into the way language and literature work.

The Journal of Cultural Analytics has a nice article that asks Can GPT-3 Pass a Writer’s Turing Test? The authors didn’t actually get access to GPT-3, but they did test GPT-2 extensively in different projects, and they assessed the output of GPT-3 reproduced in an essay on Philosophers On GPT-3. At the end they marked and commented on a number of the published short essays GPT-3 produced in response to the philosophers. They reflect on how one would decide whether GPT-3 is as good as an undergraduate writer.
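
For anyone who wants to try the kind of experiment the authors ran, here is a minimal sketch using the freely available GPT-2 model through Hugging Face’s transformers library (the prompt is mine, not one from the article):

```python
# A minimal sketch of sampling from GPT-2 with the transformers library.
# This is not the authors' code; the prompt is invented for illustration.
from transformers import pipeline, set_seed

set_seed(42)  # make the samples reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "The question of whether a machine can write an essay is"
samples = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

for i, sample in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```

Marking such samples the way one would mark student writing is, in miniature, the kind of test the article describes.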

What they never mention is Richard Powers’ novel Galatea 2.2 (Harper Perennial, 1996). In the novel an AI scientist and the narrator set out to see if they can create an AI that could pass a Masters English Literature exam. The novel is very smart and has a tragic ending.

Update: Here is a link to Awesome GPT-3 – a collection of links and articles.

Philosophers On GPT-3

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

On the Daily Nous (news by and for philosophers) there is a great collection of short essays on OpenAI‘s recently released API to GPT-3: see Philosophers On GPT-3 (updated with replies by GPT-3). And … there is a response from GPT-3. Some of the issues raised include:

Ethics: David Chalmers raises the inevitable ethics issues. Remember that GPT-2 was considered so good as to be dangerous. I don’t know if it is brilliant marketing or genuine concern, but OpenAI is continuing to treat this technology as something to be careful about. Chalmers’ framing of the ethical questions is quoted at the top of this post.

Annette Zimmermann in her essay makes an important point about the larger justice context of tools like GPT-3. It is not just a matter of ironing out the biases in the language generated (or used in training). Nor is it a matter of finding a techno-fix that makes bias go away. It is about care.

Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

She also makes an important and deep point: any AI application will have to make use of concepts from the application domain, and all of these concepts will be contested. There are no simple concepts, just as there are no concepts that don’t change over time.

Finally, Shannon Vallor has an essay that revisits Hubert Dreyfus’s critique of AI as not really understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor.

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2018). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google, and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.
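
The logic being criticized is simple enough to caricature. Here is a toy sketch (the fields and weights are invented, not any platform’s actual ranking code) of what ranking for engagement looks like:

```python
# A toy sketch of engagement-driven feed ranking; the fields and weights
# are invented for illustration, not taken from any real platform.
posts = [
    {"title": "calm news report", "predicted_clicks": 0.02, "predicted_minutes": 1.0},
    {"title": "outrage thread", "predicted_clicks": 0.12, "predicted_minutes": 8.0},
    {"title": "friend's photo", "predicted_clicks": 0.06, "predicted_minutes": 0.5},
]

def engagement_score(post):
    # More predicted attention means more ads shown alongside the post.
    return 0.5 * post["predicted_clicks"] + 0.5 * post["predicted_minutes"] / 10

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"])
```

Nothing in that objective asks whether the engaging item is true, which is Tufekci’s worry.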

Artificial intelligence: Commission takes forward its work on ethics guidelines

The European Commission has announced the next step in its Artificial Intelligence strategy. See Artificial intelligence: Commission takes forward its work on ethics guidelines. It appointed a High-Level Expert Group in June 2018. This group has now developed Seven essentials for achieving trustworthy AI:

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
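
To get a feel for what an “assessment list” might amount to, here is a hypothetical sketch (mine, not the Commission’s actual list) that turns the seven requirements into a crude interactive checklist:

```python
# A hypothetical sketch of an assessment checklist. The questions are my own
# compressions of the seven requirements, not the Commission's actual list.
requirements = {
    "Human agency and oversight": "Does it support rather than limit human autonomy?",
    "Robustness and safety": "Can it handle errors across its whole life cycle?",
    "Privacy and data governance": "Do citizens control their own data?",
    "Transparency": "Is the system's behaviour traceable?",
    "Diversity, non-discrimination and fairness": "Is it accessible to the whole range of users?",
    "Societal and environmental well-being": "Does it support social and ecological good?",
    "Accountability": "Is responsibility for outcomes clearly assigned?",
}

failures = [req for req, question in requirements.items()
            if input(f"{req}: {question} [y/n] ").strip().lower() != "y"]

print("All requirements met." if not failures
      else "Needs review: " + ", ".join(failures))
```

The Commission’s pilot-phase lists will no doubt be far more detailed; the point is just that a checklist makes the requirements answerable rather than aspirational.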

The next step, now announced, is a pilot phase that will test these essentials with stakeholders. The Commission also wants to cooperate with “like-minded partners” like Canada.

What would it mean to participate in the pilot?

Ethicists are no more ethical than the rest of us, study finds

When it comes to the crucial ethical question of calling one’s mother, most people agreed that not doing so was a moral failing.

Quartz reports on a study in Philosophical Psychology under the headline Ethicists are no more ethical than the rest of us, study finds. While one wonders how one can survey how ethical someone is, this is nonetheless a believable result. The contemporary university is deliberately structured not to be a place that changes people’s morals, but one that educates them. When we teach ethics we don’t assess or grade the morality of the student. Likewise, when we hire, promote, and assess the ethics of a philosophy professor, we don’t assess their personal morality. We assess their research, teaching, and service record, all of which can be burnished without actually being ethical. There is, if you will, a professional ethic that research and teaching should not be personal but detached.

A focus on the teaching and learning of ethics over personal morality is, despite the appearance of hypocrisy, a good thing. We try to create in the university, in the class, and in publications an openness to ideas, whoever they come from. By avoiding discussing personal morality we try to create a space where people of different views can enter into dialogue about ethics. Imagine what it would be like if it were otherwise. Imagine if my ethics class were about converting students to some standard of behaviour. Who would decide what that standard was? The ethos of professional ethics is one that emphasizes dialogue over action, history over behaviour, and ethical argumentation over disposition. Would it be ethical any other way?