CIFAR welcomes five new Canada CIFAR AI Chairs – CIFAR

Today CIFAR announced five new Canada CIFAR AI Chairs who will join the more than 120 Chairs already appointed at Canada’s three National AI Institutes (Amii in Edmonton, Mila in Montréal, and the Vector Institute in Toronto).

Today they announced that I have been appointed a Canada CIFAR AI Chair: CIFAR welcomes five new Canada CIFAR AI Chairs. Here is the U of A Folio story.

Hurrah!

Musée d’Orsay’s Van Gogh Exhibition Breaks Historic Attendance Record

The Musée d’Orsay set a record attendance of 793,556 visitors to its exhibition ‘Van Gogh in Auvers-sur-Oise’.

ARTnews has a story about how the Musée d’Orsay’s Van Gogh Exhibition Breaks Historic Attendance Record. The exhibit included a virtual reality component (Virtual Reality – Van Gogh’s Palette) where visitors could put on a headset and interact with the palette of Vincent van Gogh. You can see a 360-degree video of the experience (in French) here. It takes place in the room of Dr. Gachet, who treated van Gogh. It starts with the piano at which Gachet’s daughter Marguerite posed for a painting. Her character also narrates. Then you zoom in on a 3D-rendered version of his palette, where you hear about some of the paintings van Gogh did in the last 70 days of his life. They emerge from the palette.

It isn’t clear if the success of the show is due to the VR component or just the chance to see the originals. We can only experience the 360-degree video, which has limited interactivity. That said, I don’t find the video of the VR experience convincing. It is a creative documentary, and it is hard to see how being immersed would make much of a difference. Was it just a gimmick to get more people to come to the show?

Elon Musk, X and the Problem of Misinformation in an Era Without Trust

Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.

Jennifer Szalai of the New York Times has a good book review or essay on misinformation and disinformation, Elon Musk, X and the Problem of Misinformation in an Era Without Trust. She writes about how Big Tech (Facebook and Google) benefits from the view that people are being manipulated by social media. It helps sell their services even though there is less evidence of clear and easy manipulation than is often assumed. It is possible that there is an academic business of Big Disinfo that is invested in a story about fake news and its solutions. The deeper problem may instead be the authority of elites who regularly lie to the US public. Think of the lies told after 9/11 to justify the “war on terror”; why should we believe any “elite”?

One answer is to call on people to “do your own research.” Of course that call has its own agenda: it tends to be a call for unsophisticated research through the internet. Everyone should do their own research, but in most cases we can’t. What would it take to really understand vaccines through your own research, as opposed to joining some epistemic community and calling the parroting of its truisms research? With the internet there is an abundance of communities of research to join that will make you feel well-researched. Who needs a PhD? Who needs to actually do original research? Conspiracies, like academic communities, provide safe haven for networks of ideas.

CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama

The Eight Sleep pod is a mattress topper with a terms of service and a privacy policy. The company “may share or sell” the sleep data it collects from its users.

From Slashdot, a story about how a CEO Reminds Everyone His Company Collects Customers’ Sleep Data to Make Zeitgeisty Point About OpenAI Drama. The story is worrisome because of the data being gathered by a smart mattress company and the use it is being put to. I’m less sure about the inferences the CEO (Matteo Franceschetti) draws from his data and his call to “fix this.” How would Eight Sleep fix this? Sell more product?

The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

Today and tomorrow, representatives from a number of countries have gathered at Bletchley Park to discuss AI safety. Close to 30 countries, including Canada, were represented, and they issued The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. This declaration starts with,

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

The declaration discusses opportunities and the need to support innovation, but also mentions that “AI also poses significant risks” and mentions the usual suspects, especially “capable, general-purpose models” that could be repurposed for misuse.

What stands out is the commitment to international collaboration among the major players, including China. This is a good sign.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.

Bletchley Park is becoming a UK symbol of computing. It was, of course, where the Allied code-breaking centre was set up. It is where Turing worked on breaking the Enigma ciphers and where Colossus, an important early computer, was used to decode German ciphers and give the Allies a crucial advantage. It is appropriate that UK Prime Minister Sunak has used this site to gather representatives. Unfortunately few leaders joined him there, sending representatives instead, though Trudeau may show up on the 2nd.

Alas, the Declaration is short on specifics, though individual countries like the United States and Canada are securing voluntary commitments from players to abide by codes of conduct. China and the EU are also passing laws regulating artificial intelligence.

One thing not mentioned at all is the danger of military uses of AI. It is as if warbots are off the table in AI safety discussions.

The good news is that there will be follow-up meetings at which we can hope that concrete agreements might be worked out.

Lit sounds: U of A experts help rescue treasure trove of audio cultural history

A U of A professor is helping to rescue tens of thousands of lost audio and video recordings — on tape, film, vinyl or any other bygone media — across North America.

The Folio has a nice story about the SpokenWeb project that I am part of, Lit sounds: U of A experts help rescue treasure trove of audio cultural history. The article discusses the collaboration and importance of archiving to scholarship.

OpenAI Changes its Core Values

An article on Semafor points out that OpenAI has changed their list of “Core Values” on their Careers page. Previously, they listed their values as being:

Audacious, Thoughtful, Unpretentious, Pragmatic & Impact-Driven, Collaborative, and Growth-oriented

Now, the list of values has been changed to:

AGI focus, Intense and scrappy, Scale, Make something people love, Team spirit

In particular, the first value reads:

AGI focus

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future.

Anything that doesn’t help with that is out of scope.

This is an unambiguous change from the value of being “Audacious”, which they had glossed with “We make big bets and are unafraid to go against established norms.” They are now committed to AGI (Artificial General Intelligence) which they define on their Charter page as “highly autonomous systems that outperform humans at most economically valuable work”.

It would appear that they are committed to developing AGI that can outperform humans at work that pays and making that beneficial. I can’t help wondering why they aren’t also open to developing AGIs that can perform work that isn’t necessarily economically valuable. For that matter, what if the work AGIs can do becomes uneconomic because it can be cheaply done by an AI?

More challenging is the tension around developing AIs that can outperform humans at work that pays. How can creating AGIs that can take our work become a value? How will they make sure this is going to benefit humanity? Is this just a value in the sense of a challenge (can we make AIs that can make money?) or is there an underlying economic vision, and what would that be? I’m reminded of the ambiguous picture Ishiguro presents in Klara and the Sun of a society where only a minority of people are competitive with AIs.

Diversity Commitment

Right above the list of core values on the Careers page, there is a strong diversity statement that reads:

The development of AI must be carried out with a knowledge of and respect for the perspectives and experiences that represent the full spectrum of humanity.

This is not in the list of values, but it is designed to stand out and introduce them. One wonders if this is just an afterthought or virtue signalling. Given that it is on the Careers page, it could be a warning about what they expect of applicants: “Don’t apply unless you can talk EDI!” It isn’t a commitment to diverse hiring; it is more about what they expect potential hires to know and respect.

Now they can develop a chatbot that can test applicants’ knowledge of and respect for diversity and save themselves the trouble of diversity hiring.

(Minor edits suggested by ChatGPT.)

Call for papers 2024 – Replaying Japan

Replaying Japan 2024 – The 12th International Japan Game Studies Conference. Conference theme: Preservation, Innovation and New Directions in Japanese Game Studies. Dates: Monday, August 19 and Tuesday, August 20 (University at Buffalo, SUNY); Wednesday, August 21 (The Strong National Museum of Play). Locations: University at Buffalo, SUNY (North Campus) and The Strong National Museum of Play.

The Call for Papers for Replaying Japan 2024 has just gone out. The theme is Preservation, Innovation and New Directions in Japanese Game Studies.

The conference, which is being organized by Tsugumi (Mimi) Okabe at the University at Buffalo, will also spend one day at the Strong National Museum of Play in Rochester, which has a fabulous collection of Japanese video game artefacts.

The conference could be considered an example of regional game studies, but Japan is hardly at the periphery of the games industry, even if it is underrepresented in game studies as a field. It might be more accurate to describe the conference, and the community that has gathered around it, as an inter-regional conference where people bring very different perspectives on game studies to an international discussion of Japanese game culture.

The AP lays the groundwork for an AI-assisted newsroom

The Associated Press published standards today for generative AI use in its newsroom.

As we deal with the changes brought about by this recent generation of chatbots in the academy, we could learn from guidelines emerging from other fields like journalism. Engadget reports that The AP lays the groundwork for an AI-assisted newsroom, and you can see the Associated Press guidelines here.

Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work.

AP also suggests they don’t see chatbots replacing journalists any time soon, as “the central role of the AP journalist – gathering, evaluating and ordering facts into news stories, video, photography and audio for our members and customers – will not change.”

It should be noted (as AP does) that they have an agreement with OpenAI.

‘New York Times’ considers legal action against OpenAI as copyright tensions swirl : NPR

The news publisher and maker of ChatGPT have held tense negotiations over striking a licensing deal for the use of the paper’s articles to train the chatbot. Now, legal action is being considered.

Finally we are seeing a serious challenge to the way AI companies exploit written resources on the web, as the New York Times takes on OpenAI: ‘New York Times’ considers legal action against OpenAI as copyright tensions swirl.

A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.

It remains to be seen what the legalities are. Does using a text to train a model constitute the making of a copy in violation of copyright? Does the model contain something equivalent to a copy of the original? These issues are being explored in the AI image generation space, where Stability AI is being sued by Getty Images. I hope the New York Times doesn’t just settle quietly before there is a public airing of the issues around the exploitation/ownership of written work. I also note that the Authors Guild is starting to advocate on behalf of authors,

“It says it’s not fair to use our stuff in your AI without permission or payment,” said Mary Rasenberger, CEO of the Authors Guild. The non-profit writers’ advocacy organization created the letter, and sent it out to the AI companies on Monday. “So please start compensating us and talking to us.”

This could also have repercussions in academia, as many of us scrape the web and social media when studying contemporary issues. For that matter, what do we think about the use of our own work? One could say that our work, supported as it is by the public, should be fair game for gathering, training and innovative reuse. Aren’t we supported for the public good? Perhaps we should assert that academic prose is available for training models?

What are our ethics?