In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story, In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack has created a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine; the Ukrainians are also weaponizing cheap technologies and, in the process, removing a lot of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks assembled from plans circulating on the internet?

Why the pope has the ears of G7 leaders on the ethics of AI

Pope Francis is leaning on the thinking of Paolo Benanti, a friar adept at explaining how technology can change the world

The Guardian has some good analysis on Why the pope has the ears of G7 leaders on the ethics of AI. Italy’s prime minister, Giorgia Meloni, invited the pope to address the G7 leaders on the issue of AI. I blogged about this here. It is worth pointing out that this is not the first time the Vatican has intervened on the issue of AI ethics. Here is a short timeline:

  • In 2020 a group of Catholic organizations and industry heavyweights signed the Rome Call (Call for AI Ethics). The Archbishop of Canterbury recently signed as well.
  • In 2021 they created the RenAIssance Foundation, building on the Rome Call. Its scientific director is Paolo Benanti, a charismatic Franciscan friar, professor, and writer on religion and technology. He is apparently advising both Meloni and the pope, and he coined the term “algo-ethics”. Most of his publications are in Italian, but there is an interview in English. He is also apparently on the OECD’s expert panel now.
  • In 2022 Benanti published Human in the Loop: Decisioni umane e intelligenze artificiali (Human in the Loop: Human Decisions and Artificial Intelligences), which is about the importance of ethics to AI and the place of human decisions in ethics.
  • In 2024 Meloni invited the pope to address the G7 leaders gathered in Italy on AI.

Rebind | Read Like Never Before With Experts, AI, & Original Content

Experience the next chapter in reading with Rebind: the first AI-reading platform. Embark on expert-guided journeys through timeless classics.

From a NYTimes story I learned about John Kaag’s new initiative Rebind | Read Like Never Before With Experts, AI, & Original Content. The philosophers Kaag and Clancy Martin have teamed up with an investor to start a company that creates AI-enhanced “rebindings” of classics. They work with out-of-copyright books and then pay someone to interpret or comment on each book. The commentary is then used to train an AI with whom you can dialogue as you go through the book. The end result (which I am on the waitlist to try) will be a reading experience enhanced by interpretative videos and chances to interact. It answers Plato’s old critique of text, that you cannot ask questions of it. Now you can.

This reminds me of an experiment by Schwitzgebel, Strasser, and Crosby, who created a Daniel Dennett chatbot. Here you can see Schwitzgebel’s reflections on the project.

The project raised ethical issues, such as whether it is right to simulate a living person. In this case they asked for Dennett’s permission and didn’t give people direct access to the chatbot. With the announcements about Apple Intelligence, it looks like Apple may provide an AI that is part of the system and has access to your combined files so as to help with search and to help you talk with yourself. Internal dialogue, of course, is the paradigmatic manifestation of consciousness. Could one import one or two thinkers to have a multi-party dialogue about one’s thinking over time? “What do you think, Plato: should I write another paper about ethics and technology?”

Surgeon General: Social Media Platforms Need a Health Warning

It’s time for decisive action to protect our young people.

The New York Times is carrying an opinion piece by Vivek H. Murthy, the Surgeon General of the United States, arguing that Social Media Platforms Need a Health Warning. He argues that we have a youth mental health crisis and that “social media has emerged as an important contributor.” For this reason he wants social media platforms to carry a warning label similar to that on cigarettes, something that would take congressional action.

He offers more advice in a Social Media and Youth Mental Health advisory, including protecting youth from harassment and problematic content. The rhetoric is one of giving parents support:

There is no seatbelt for parents to click, no helmet to snap in place, no assurance that trusted experts have investigated and ensured that these platforms are safe for our kids. There are just parents and their children, trying to figure it out on their own, pitted against some of the best product engineers and most well-resourced companies in the world.

Social media has gone from a tool of democracy (remember Tahrir Square?) to an info-plague in a little over ten years. Just as it is easy to seek salvation in technology, and the platforms encourage such hype, it is also easy to blame it. The Surgeon General’s sort of advice will get broad support, but will anything happen? How long will it take for regulation and civil society to box the platforms into civil business practices? The Surgeon General calls how well we protect our children a “moral test.” Indeed.

Pope to G7: AI is ‘neither objective nor neutral’

Vatican News has a full report on the Pope’s address to the G7 leaders on Artificial Intelligence. In the address the Pope called AI “a true cognitive-industrial revolution” that could lead to “complex epochal transformations”. The full address is available (in various translations) here.

After all, we cannot doubt that the advent of artificial intelligence represents a true cognitive-industrial revolution, which will contribute to the creation of a new social system characterised by complex epochal transformations. For example, artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research and the possibility of giving demanding and arduous work to machines. Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes, raising the dangerous possibility that a “throwaway culture” be preferred to a “culture of encounter”.

Partecipazione del Santo Padre Francesco al G7 a Borgo Egnazia, 14.06.2024

The Pope makes a number of interesting points, starting with how tool-making is a looking outward to the environment – a techno-human condition that is part of being human. It is a particular form of openness to the environment that can lead to good or be corrupted, which is why ethics matter. “To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility. This means speaking about ethics.”

The Pope also makes a point that I think Lyotard made in The Postmodern Condition, namely that datafication is limiting our ideas about what knowledge could be. AI could go further and limit our ideas about what it is to think at all. As the Pope says, “We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models.”

The Pope concludes by reminding us that we cannot avoid politics and that what we need is a healthy politics capable of creating “the conditions for such good use [of AI] to be possible and fruitful.”

Media Monitoring of the Past · impresso

Leveraging an unprecedented corpus of newspaper and radio archives, Impresso – Media Monitoring of the Past is an interdisciplinary research project that uses machine learning to pursue a paradigm shift in the processing, semantic enrichment, representation, exploration, and study of historical media across modalities, time, languages, and national borders.

I just learned about the Swiss project Impresso: Media Monitoring of the Past. This project has an impressive Web application that lets you search across 76 newspapers in two languages from two countries.

Key to the larger project is using machine learning to work across multiple modalities, such as:

  • News text and radio broadcasts
  • Text and images
  • French and German
  • Different countries

A Data Lab that uses IPython is coming soon. They also have documentation about a Topic Modelling tool, but I couldn’t find the actual tool.

Anyway, this strikes me as an example of an advanced multi-modal news research environment.


Securing Canada’s AI advantage | Prime Minister of Canada

AI is already unlocking massive growth in industries across the economy. Many Canadians are already feeling the benefits of using AI to work smarter and faster.

The Prime Minister’s office has just announced a large investment in AI. See Securing Canada’s AI advantage | Prime Minister of Canada. This is a pre-budget announcement of $2.4 billion going to AI related things including:

  • $2 billion “to build and provide access to computing capabilities and technological infrastructure for Canada’s world-leading AI researchers, start-ups, and scale-ups”
  • Setting up a “Canadian AI Safety Institute” with $50 million “to further the safe development and deployment of AI”. This sounds like a security rather than ethics institute as it will “help Canada better understand and protect against the risks of advanced or nefarious AI systems, including to specific communities.”
  • Funding for the “enforcement of the Artificial Intelligence and Data Act, with $5.1 million for the Office of the AI and Data Commissioner.”

There are also funds for startups, workers, and businesses.

The massive funding for infrastructure follows a weekend opinion piece in the Globe and Mail (March 21, 2024), Canada’s AI infrastructure does not compute. The article suggests we have a lot of talent but don’t have the metal. Well … now we are getting some metal.

CIFAR welcomes five new Canada CIFAR AI Chairs – CIFAR

Today CIFAR announced five new Canada CIFAR AI Chairs who will join the more than 120 Chairs already appointed at Canada’s three National AI Institutes (Amii in Edmonton, Mila in Montréal, and the Vector Institute in Toronto).

Today CIFAR announced that I have been appointed a Canada CIFAR AI Chair: CIFAR welcomes five new Canada CIFAR AI Chairs – CIFAR. Here is the U of A Folio story.

Hurrah!

Musée d’Orsay’s Van Gogh Exhibition Breaks Historic Attendance Record

The Musée d’Orsay set a record attendance of 793,556 visitors to its exhibition ‘Van Gogh in Auvers-sur-Oise’.

ARTnews has a story about how the Musée d’Orsay’s Van Gogh Exhibition Breaks Historic Attendance Record. The exhibit included a virtual reality component (Virtual Reality – Van Gogh’s Palette) in which visitors could put on a headset and interact with the palette of Vincent van Gogh. You can see a 360-degree video of the experience here, in French. It takes place in the room of Dr. Gachet, who treated van Gogh. It starts with the piano at which his daughter Marguerite posed for a painting; her character also narrates. Then you zoom in on a 3D-rendered version of his palette, where you hear about some of the paintings he did in the last 70 days of his life. They emerge from the palette.

It isn’t clear whether the success of the show is due to the VR component or just the chance to see originals. We can only experience the 360-degree video, which has limited interactivity. That said, I don’t find the video of the VR experience convincing. It is a creative documentary, and it is hard to see how being immersed would make much of a difference. Was it just a gimmick to get more people to come to the show?

Elon Musk, X and the Problem of Misinformation in an Era Without Trust

Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.

Jennifer Szalai of the New York Times has a good book review essay on misinformation and disinformation, Elon Musk, X and the Problem of Misinformation in an Era Without Trust. She writes about how Big Tech (Facebook and Google) benefits from the view that people are being manipulated by social media: it helps sell their services, even though there is little evidence of clear and easy manipulation. It is possible that there is an academic business of Big Disinfo that is invested in a story about fake news and its solutions. The deeper problem may instead be the authority of elites who regularly lie to the US public. Think of the lies told after 9/11 to justify the “war on terror”; why should we believe any “elite”?

One answer is to call people to “Do your own research.” Of course that call has its own agenda. It tends to be a call for unsophisticated research through the internet. Of course, everyone should do their own research, but we can’t in most cases. What would it take to really understand vaccines through your own research, as opposed to joining some epistemic community and calling research the parroting of their truisms. With the internet there is an abundance of communities of research to join that will make you feel well-researched. Who needs a PhD? Who needs to actually do original research? Conspiracies like academic communities provide safe haven for networks of ideas.