Sexism in the Gaming Industry

Once again we are reading about sexism in the video game industry. The New York Times has a story from June 23rd on how Dozens of Women in Gaming Speak Out About Sexism and Harassment. We have heard these stories regularly since GamerGate, though many of them focus on the behaviour of Twitch stars. One hopes there will be some change.

Kenzie Gordon, who is doing a PhD here at the U of Alberta, described why we have this persistent sexism in gaming:

The gaming industry is particularly conducive to a culture of misogyny and sexual harassment, Ms. Gordon said, because straight white men have “created the identity of the gamer as this exclusive property.” When women, people of color or L.G.B.T.Q. people try to break into the industry, she said, the “toxic geek masculinity” pushes back in ways that often lead to sexual abuse and bullying.

One positive change is happening at Ubisoft. Engadget has a story on how the Ubisoft CEO lays out a plan to change the company’s toxic culture. This comes after complaints, including an extensive post by Chelsea O’Hara on Breaking My Silence at Ubisoft Toronto.

These concrete developments at companies like Ubisoft are in contrast with what happened a year earlier, in 2019, when there was a backlash against victims who called out their harassers after indie developer Alec Holowka committed suicide. As the Wired article by Laurie Penny, Gaming’s #MeToo Moment and the Tyranny of Male Fragility, points out, the trolls attacked the victims using the logic that they should have known Holowka was fragile and let him be.

The message is clear: Men’s mental health matters more than women’s. Men’s suffering and self-loathing is treated as a public concern, because men are permitted to be real people whose inner lives and dreams matter. Who cares, then, how many women they destroy along the way?

What is the TikTok subculture Dark Academia?

School may be out indefinitely, but on social media there’s a thriving subculture devoted to the aesthetic of all things scholarly.

The New York Times has an article answering the question, What is the TikTok subculture Dark Academia? It describes a subculture that started on Tumblr and evolved on TikTok and Instagram, one that values a tweedy academic aesthetic. Sort of Hogwarts meets humanism. Alas, just as the aesthetics of humanities academic culture becomes a thing, it gets superseded by Goblincore. Or does it just fade like a pressed flower?

Now we need to start a retro Humanities Computing aesthetic.

CEO of exam monitoring software Proctorio apologises for posting student’s chat logs on Reddit

Australian students who have raised privacy concerns describe the incident involving a Canadian student as ‘freakishly disrespectful’

The Guardian has a story about how the CEO of exam monitoring software Proctorio apologises for posting student’s chat logs on Reddit. Proctorio provides software for monitoring (proctoring) students on their own laptops while they take exams. It uses the video camera and watches the keyboard, presumably to see whether the student tries to cheat on a timed exam. Apparently a UBC student claimed that he couldn’t get help from Proctorio in a timely fashion while using it (presumably with the exam timer running). This led Australian students to criticize the use of Proctorio, which in turn led the CEO to argue that the UBC student had lied, posting a partial transcript to show that the student was answered in a timely fashion. That the CEO would post a partial transcript shows that:

  1. staff at Proctorio have access to the logs and transcripts of student behaviour, and
  2. they don’t have privacy protection protocols in place to prevent that private information from being leaked.

I can’t help feeling that there is a pattern here, since we also see senior politicians sometimes leaking data about citizens who criticize them. The privacy protocols may be in place, but they aren’t observed or can’t be enforced against the senior staff (who are presumably the ones who need to do the enforcing). You also sense that the senior person feels the critic forfeited their right to privacy by lying or misrepresenting something in their criticism.

This raises the question of whether someone who misuses or lies about a service deserves the ongoing protection of that service. Of course, we want to say that they do, but nations like the UK have stripped citizens like Shamima Begum of citizenship, and thus their rights, because they behaved traitorously in joining ISIS. Countries have killed their own citizens who became terrorists, without a trial. Clearly we feel that in some cases one can unilaterally remove someone’s rights, including the right to life, because of their behaviour.

The bad things that happen when algorithms run online shops

Smart software controls the prices and products you see when you shop online – and sometimes it can go spectacularly wrong, discovers Chris Baraniuk.

The BBC has a story about The bad things that happen when algorithms run online shops. The story describes how e-commerce systems designed to set prices dynamically (in comparison with someone else’s price, for example) can go wrong and end up charging customers much more than they are willing to pay, or charging them virtually nothing so the store loses money.

The story links to an instructive blog entry by Michael Eisen about how two algorithms pushed the price of a book into the millions, Amazon’s $23,698,655.93 book about flies. The blog entry is a perfect little story about the problems you get when you have algorithms responding iteratively to each other without any sanity checks.
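The dynamic Eisen describes can be sketched in a few lines. In his account, one seller repriced slightly below the other while the second priced well above the first; the multipliers below (roughly 0.9983 and 1.270589) are the ones reported in his post, though the starting prices and the repricing loop here are my own illustrative sketch, as is the price ceiling that would have served as a sanity check:

```python
# Two sellers reprice against each other once a day, as in Eisen's account:
# seller A undercuts seller B slightly; seller B prices well above seller A.
UNDERCUT = 0.9983      # multiplier reported in Eisen's post
MARKUP = 1.270589      # multiplier reported in Eisen's post
CEILING = 1000.00      # the sanity check neither algorithm had

def reprice(price_a, price_b, days):
    """Run the unchecked feedback loop: prices compound by ~27% per cycle."""
    for _ in range(days):
        price_a = round(price_b * UNDERCUT, 2)
        price_b = round(price_a * MARKUP, 2)
    return price_a, price_b

def reprice_safely(price_a, price_b, days, ceiling=CEILING):
    """Same loop with the trivial fix: never price above a sanity ceiling."""
    for _ in range(days):
        price_a = min(round(price_b * UNDERCUT, 2), ceiling)
        price_b = min(round(price_a * MARKUP, 2), ceiling)
    return price_a, price_b

# Hypothetical starting prices; a month of unchecked repricing reaches
# tens of thousands of dollars, and a few months reaches the millions.
a, b = reprice(17.99, 35.54, 30)
print(f"Unchecked after 30 days: A=${a:,.2f}  B=${b:,.2f}")
a, b = reprice_safely(17.99, 35.54, 30)
print(f"With a ceiling after 30 days: A=${a:,.2f}  B=${b:,.2f}")
```

Each full cycle multiplies the price by about 0.9983 × 1.270589 ≈ 1.268, which is why the runaway is exponential rather than gradual: the absurdity only becomes visible once the compounding has already done its work.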

MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs

Vinay Prabhu, chief scientist at UnifyID, a privacy startup in Silicon Valley, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland, pored over the MIT database and discovered thousands of images labelled with racist slurs for Black and Asian people, and derogatory terms used to describe women. They revealed their findings in a paper undergoing peer review for the 2021 Workshop on Applications of Computer Vision conference.

Another of those “what were they thinking when they created the dataset” stories from The Register tells how MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The MIT Tiny Images dataset was created automatically using scripts that drew on the WordNet database of terms, which itself contains derogatory terms. Nobody thought to check either the terms taken from WordNet or the resulting images scoured from the net. As a result there are not only lots of images for which permission was not secured, but also racist, sexist, and otherwise derogatory labels on the images, which in turn means that an AI trained on these will generate racist and sexist results.
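The missing step is mechanical and cheap: before scraping images for each label, screen the label list itself and route anything suspect to a human reviewer. A minimal sketch of such a screen (the labels and denylist here are illustrative placeholders of my own, not the actual WordNet terms or any real curated list):

```python
# Screen candidate class labels before any scraping happens.
# The denylist and labels below are illustrative placeholders only;
# a real pipeline would use a curated, maintained list of harmful terms.
DENYLIST = {"slur_1", "slur_2", "derogatory_term"}

def screen_labels(candidate_labels, denylist=DENYLIST):
    """Split labels into those safe to keep and those needing human review."""
    kept, flagged = [], []
    for label in candidate_labels:
        if label.lower() in denylist:
            flagged.append(label)   # a human must review before scraping
        else:
            kept.append(label)
    return kept, flagged

kept, flagged = screen_labels(["dog", "slur_1", "teacher"])
print("kept:", kept)
print("flagged for review:", flagged)
```

The point is not that a denylist solves the problem, only that even this crude a check would have caught thousands of the labels The Register describes, at essentially no cost compared to the scraping itself.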

The article also mentions a general problem with academic datasets. Companies like Facebook can afford to hire actors to pose for images and can thus secure permission to use the images for training. Academic datasets (and some commercial ones, like the Clearview AI database) tend to be scraped and therefore will not have the explicit permission of the copyright holders or the people shown. In effect, academics are resorting to mass surveillance to generate training sets. One wonders if we could crowdsource a training set by and for people.

What coding really teaches children

You’ve seen movies where programmers pound out torrents of code? That is nothing like reality. Most of the time, coders don’t type at all; they sit and stare morosely at the screen, running their hands through their hair, trying to spot what they’ve done wrong. It can take hours, days, or even weeks. But once the bug is fixed and the program starts working again, the burst of pleasure has a narcotic effect.

Stéfan pointed me to a nice opinion piece about programming education in the Globe titled, Opinion: What coding really teaches children. Clive Thompson argues that teaching programming in elementary school will not necessarily teach math, but it can teach kids about the digital world and teach them the persistence it takes to get complex things working. He also worries, as I do, about asking elementary teachers to learn enough coding to be able to teach it. This could be a recipe for alienating a lot of students who are taught by teachers who haven’t really learned it themselves.

Internet Archive closes the National Emergency Library

Within a few days of the announcement that libraries, schools and colleges across the nation would be closing due to the COVID-19 global pandemic, we launched the temporary National Emergency Library to provide books to support emergency remote teaching, research activities, independent scholarship, and intellectual stimulation during the closures.  […]

According to the Internet Archive blog, the Temporary National Emergency Library to close 2 weeks early, returning to traditional controlled digital lending. The National Emergency Library (NEL) was open to anyone in the world during a time when physical libraries were closed. It made books the IA had digitized available to read online. It was supposed to run to the end of June, but closed two weeks early because four commercial publishers decided to sue.

The blog entry points to what the HathiTrust is doing as part of their Emergency Temporary Access Service, which lets member libraries (and the U of Alberta Library is one) provide access to digital copies of books they have corresponding physical copies of. This is only available to “member libraries that have experienced unexpected or involuntary, temporary disruption to normal operations, requiring it to be closed to the public”.

It is a pity the IA NEL was discontinued; for a moment there it looked like large public service digital libraries might become normal. Instead it looks like we will have a mix of commercial e-book services and Controlled Digital Lending (CDL) offered by libraries that have the physical books and the digital resources to organize it. The IA blog entry goes on to note that even CDL is under attack. Here is a story from Plagiarism Today:

Though the National Emergency Library may have been what provoked the lawsuit, the complaint itself is much broader. Ultimately, it targets the entirety of the IA’s digital lending practices, including the scanning of physical books to create digital books to lend.

The IA has long held that its practices are covered under the concept of controlled digital lending (CDL). However, as the complaint notes, the idea has not been codified by a court and is, at best, very controversial. According to the complaint, the practice of scanning a physical book for digital lending, even when the number of copies is controlled, is an infringement.
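The “controlled” in CDL is a simple invariant: the number of digital copies on loan never exceeds the number of physical copies the library owns, an owned-to-loaned ratio of one. The NEL’s controversial move was precisely to suspend that limit during the closures. A minimal sketch of the rule (the class and method names are my own, not taken from any actual library system):

```python
class CDLTitle:
    """Controlled Digital Lending: digital loans never exceed owned copies."""

    def __init__(self, owned_physical_copies):
        self.owned = owned_physical_copies
        self.on_loan = 0

    def checkout(self):
        """Lend a digital copy only if a physical copy remains 'on the shelf'."""
        if self.on_loan >= self.owned:
            return False          # all copies out; the patron joins a waitlist
        self.on_loan += 1
        return True

    def checkin(self):
        """Return a digital copy, freeing a slot for the next patron."""
        if self.on_loan > 0:
            self.on_loan -= 1

book = CDLTitle(owned_physical_copies=2)
print(book.checkout())  # True: first copy lent
print(book.checkout())  # True: second copy lent
print(book.checkout())  # False: both copies are out
```

The publishers’ complaint quoted above is that even this controlled version, with the scanning it requires, is an infringement, which is what makes the lawsuit broader than the NEL itself.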

SimRefinery and Maxis Business Simulations

SimRefinery Screenshot

SimRefinery was the first simulation developed by a Maxis spin-off company called Maxis Business Simulations (MBS). The simulation was made for Chevron and was developed using the tools Maxis had built for games like SimCity. Phil Salvador tells a wonderful story about MBS and SimRefinery in a thoroughly researched essay, When SimCity got serious: the story of Maxis Business Simulations and SimRefinery. Take some time out and read it.

Part of what is interesting in the essay is how Salvador documents the different views about what such simulations were good for. SimRefinery was not an accurate simulation that could cover the complexity of the chemical engineering of a refinery; so what was it good for? Chevron apparently wanted something to help staff who weren’t engineers understand some of the connectedness of a refinery – how problems in one area could impact others. Will Wright, the genius behind Maxis, didn’t think serious simulations were possible, or something they wanted to do. He saw SimCity as a caricature that was fun; at best it might give people a “mental model” of the issues around city management. It was for that reason that MBS was spun off to contract with businesses that felt serious simulations were feasible and useful.

I learned about the Salvador article from an Ars Technica story about SimRefinery and how A lost Maxis “Sim” game has been discovered by an Ars reader [Updated]. The story tells how someone found a prototype of SimRefinery and uploaded it to the Internet Archive, only to later take it back down, so it is no longer available. In the meantime Phil Salvador recorded a Twitch stream of himself checking out the game, so you can get a sense of how it worked.

Obscure Indian cyber firm spied on politicians, investors worldwide

A cache of data reviewed by Reuters provides insight into the operation, detailing tens of thousands of malicious messages designed to trick victims into giving up their passwords that were sent by BellTroX between 2013 and 2020.

It was bound to happen. Reuters has an important story that an Obscure Indian cyber firm spied on politicians, investors worldwide. The firm, BellTroX InfoTech Services, offered hacking services to private investigators and others. While we focus on state-sponsored hacking and misinformation, there is a whole murky world of commercial hacking going on.

The Citizen Lab played a role in uncovering what BellTroX was doing. They have a report here about Dark Basin, a hacking-for-hire outfit that they link to BellTroX. The report is well worth the read, as it details the infrastructure uncovered, the types of attacks, and the consequences.

The growth of a hack-for-hire industry may be fueled by the increasing normalization of other forms of commercialized cyber offensive activity, from digital surveillance to “hacking back,” whether marketed to private individuals, governments or the private sector. Further, the growth of private intelligence firms, and the ubiquity of technology, may also be fueling an increasing demand for the types of services offered by BellTroX. At the same time, the growth of the private investigations industry may be contributing to making such cyber services more widely available and perceived as acceptable.

They conclude that the growth of this industry is a threat to civil society.

What if it became so affordable and normalized that any unscrupulous person could hire hackers to harass an ex-girlfriend or neighbour?

CSDH / SCHN 2020 was brilliant online

Today was the last day of the CSDH / SCHN 2020 online conference. You can see my conference notes here. The conference had to go online due to Covid-19 and the cancellation of Congress 2020. That said, the online conference went brilliantly. The Programme Committee, chaired by Kim Martin, deserves a lot of credit, as do the folks at the U of Alberta Arts Resource Centre who provided technical support. Some of the things they did that worked well:

  • The schedule had a single track across 5 days rather than parallel tracks over 3 days. See the schedule.
  • There were only three and a half hours of sessions a day (from 9:00am to 12:30 Western time), so you could get other things done. (There were also hangout sessions before and after.)
  • Papers (or prepared presentations) had to be put up the week before on Humanities Commons.
  • The live presentations during the conference were thus kept to 3 minutes or so, which kept sessions short enough to allow a single track.
  • They had a chair and a respondent for each session, which meant that there was a lot of discussion instead of long papers with no time for questions. In fact, the discussion seemed better than at on-site conferences.
  • They used Eventbrite for registration, Zoom for the registrants-only parts of the conference, and Google Meet for the open parts.
  • They had hangout or informal sessions at the beginning and end of each day where more informal discussion could take place.

The nice thing about the conference was that they took advantage of the medium. As none of us had flown to London, Ontario, they were able to stretch the conference over 5 days, but not use up the entire day.

All told, I think they have shown that an online conference can work surprisingly well if properly planned and supported.