Engaged Humanities Partnerships Between Academia And Tribal Communities

Last week the Oregon Humanities Center put on a great two-day conference on Engaged Humanities Partnerships Between Academia And Tribal Communities that I attended. (See my conference notes here.) The conference looked at ways that the humanities can partner with indigenous communities.

One of the highlights was Jennifer O’Neal’s talk about the importance of decolonizing the archives and work she is doing towards that. You can see a paper by her on the subject titled “The Right to Know”: Decolonizing Native American Archives.

I talked about the situation in Canada in general, and the University of Alberta in particular, after the Truth and Reconciliation Commission.

Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming

“Every move you make… every word you say, every game you play… I’ll be watching you.” (The Police, “Every Breath You Take”)

Education Week has an alarming story about how schools are using surveillance: Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming. The story is by Benjamin Herold and dates from May 30, 2019. It talks not only about the deployment of cameras, but also about the use of companies like Social Sentinel, Securly, and Gaggle that monitor social media or school computers.

Every day, Gaggle monitors the digital content created by nearly 5 million U.S. K-12 students. That includes all their files, messages, and class assignments created and stored using school-issued devices and accounts.

The company’s machine-learning algorithms automatically scan all that information, looking for keywords and other clues that might indicate something bad is about to happen. Human employees at Gaggle review the most serious alerts before deciding whether to notify school district officials responsible for some combination of safety, technology, and student services. Typically, those administrators then decide on a case-by-case basis whether to inform principals or other building-level staff members.
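As described, the pipeline has three stages: an automated keyword scan, human triage of the most serious alerts, and escalation to district officials. A minimal Python sketch of that kind of flagging pipeline might look like the following; the terms, weights, and threshold are invented for illustration, since Gaggle’s actual models and categories are not public.

```python
from dataclasses import dataclass
from typing import Optional

# Invented keyword weights and threshold, for illustration only;
# Gaggle's actual models and categories are not public.
FLAG_TERMS = {"hurt myself": 3, "bring a gun": 3, "kill": 2, "fight": 1}
REVIEW_THRESHOLD = 2  # alerts scoring at or above this go to a human reviewer

@dataclass
class Alert:
    student_id: str
    text: str
    score: int

def scan_document(student_id: str, text: str) -> Optional[Alert]:
    """Stage 1: automated scan scores a document by keyword matches."""
    lowered = text.lower()
    score = sum(w for term, w in FLAG_TERMS.items() if term in lowered)
    return Alert(student_id, text, score) if score > 0 else None

def triage(alert: Alert) -> str:
    """Stage 2: human review decides whether to escalate to the district."""
    if alert.score >= REVIEW_THRESHOLD:
        return "escalate to district safety officials"
    return "log only"

alert = scan_document("s123", "going to kill this exam tomorrow")
if alert:
    print(alert.score, "->", triage(alert))  # scores 2: a false positive escalated
```

Even this toy version shows the core problem: keyword matching cannot tell a threat from a figure of speech, which is where the absurd cases come from.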

The story provides details that run from the serious to the absurd. It mentions the ACLU’s concern that such surveillance can desensitize children to being watched and normalize it. The ACLU story makes a connection with laws that forbid federal agencies from studying or sharing data that could make the case for gun control. This creates a situation where the obvious ways to stop gun violence in schools aren’t studied, so surveillance companies step in with solutions.

Needless to say, surveillance has its own potential harms beyond desensitization. The ACLU story lists the following potential harms:

  • Suppression of students’ intellectual freedom, because students will not want to investigate unpopular or verboten subjects if the focus of their research might be revealed.
  • Suppression of students’ freedom of speech, because students will not feel at ease engaging in private conversations they do not want revealed to the world at large.
  • Suppression of students’ freedom of association, because surveillance can reveal a student’s social contacts and the groups a student engages with, including groups a student might wish to keep private, like LGBTQ organizations or those promoting locally unpopular political views or candidates.
  • Undermining students’ expectation of privacy, which occurs when they know their movements, communications, and associations are being watched and scrutinized.
  • False identification of students as safety threats, which exposes them to a range of physical, emotional, and psychological harms.

As with the massive investment in surveillance for national security and counter-terrorism purposes, we need to ask whether the cost of these systems, both financial and otherwise, is worth it. Unfortunately, protecting children, like protecting against terrorism, is hard to put a price on, which makes it hard to argue against such investments.

Making AI accountable easier said than done, says U of A expert

The Folio has a story on the ethics of AI that quotes me, titled Making AI accountable easier said than done, says U of A expert.

One of the issues that interests me most now is the history of this discussion. We tend to treat the ethics of AI as a new issue, but people have been thinking about how automation would affect people for some time. There have been textbooks for teaching computer ethics, like that of Deborah G. Johnson, since the 1980s. As part of research we did on how computers were presented in the news, we found articles from the 1960s about how automation might put people out of work. They weren’t thinking of AI then, but the ethical and social effects that concerned people were similar. What few people discussed, however, was how automation affected different groups differently. Michele Landsberg wrote a prescient article, “Will Computer Replace the Working Girl?”, in 1964 for the women’s section of The Globe and Mail, arguing that it was women in the typing pools who were being put out of work. Likewise, I suspect that some groups will be more affected by AI than others and that we need to prepare for that.

A good book addressing how universities might prepare for the disruption of artificial intelligence is Robot-Proof: Higher Education in the Age of Artificial Intelligence by Joseph Aoun (MIT Press, 2017).

Instead of educating college students for jobs that are about to disappear under the rising tide of technology, twenty-first-century universities should liberate them from outdated career models and give them ownership of their own futures. They should equip them with the literacies and skills they need to thrive in this new economy defined by technology, as well as continue providing them with access to the learning they need to face the challenges of life in a diverse, global environment.

Big Tech’s Half-Hearted Response To Fake News And Election Hacking

Despite big hand waves, Facebook, Google, and Twitter aren’t doing enough to stop misinformation.

From Slashdot I found a story: Big Tech’s Half-Hearted Response To Fake News And Election Hacking. This Fast Company story talks about ways that social media companies are trying to prevent the misuse of their platforms as we head into the US midterms.

For Facebook, Google, and Twitter the fight against fake news seems to be two-pronged: De-incentivize the targeted content and provide avenues to correct factual inaccuracies. These are both surface fixes, however, akin to putting caulk on the Grand Canyon.

And, despite grand hand waves, both approaches are reactive. They don’t aim at understanding how this problem became prevalent, or creating a method that attacks the systemic issue. Instead these advertising giants implement new mechanisms by which people can report one-off issues—and by which the platforms will be left playing cat-and-mouse games against fake news—all the while giving no real clear glimpse into their opaque ad platforms.

The problem is that these companies make too much money from ads and elections are a chance to get lots of ads, manipulative or not. For that matter, what political ad doesn’t try to manipulate viewers?

The Slashdot story was actually about Mozilla’s Responsible Computer Science Challenge, which will support initiatives to embed ethics in computer science courses. Alas, the efficacy of ethics courses is questionable. Aristotle would say that if you don’t have the disposition to be ethical, no amount of training will do any good. It just helps the unethical pretend to be ethical.

Re-Imagining Education In An Automating World conference at George Brown

On May 25th I had a chance to attend a gem of a conference organized by the Philosophy of Education (POE) committee at George Brown. They organized a conference with different modalities, from conversations to formal talks to group work. The topic was Re-Imagining Education in An Automating World (see my conference notes here) and this conference is a seed for a larger one next year.

I gave a talk on Digital Citizenship at the end of the day where I tried to convince people that:

  • Data analytics are now a matter of citizenship (we all need to understand how we are being manipulated).
  • We therefore need to teach data literacy in the arts and humanities, so that
  • Students are prepared to contribute to and critique the ways analytics are used and deployed.
  • This can be done by integrating data and analytical components in any course using field-appropriate data (a minimal sketch of such an exercise follows below).
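As an example of what such a component might look like, here is a minimal sketch of a data-literacy exercise: students compute simple word frequencies over a field-appropriate text, Voyant-style, and then critique what the counts hide. The file name and stop-word list are placeholders.

```python
from collections import Counter
import re

# Placeholder corpus file; in a course this would be field-appropriate data
# (a novel in a literature class, committee transcripts in political science).
with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)
stop_words = {"the", "and", "of", "to", "a", "in", "that", "is", "it"}
counts = Counter(w for w in words if w not in stop_words)

# The critical discussion starts here: what do raw counts miss or distort?
for word, n in counts.most_common(10):
    print(f"{word}\t{n}")
```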


Research Team Security

One of the researchers on the GamerGate Reactions team has created a fabulous set of recommendations for team members doing dangerous research. See Security_Recommendations_2018_v2.0. This document brings together in one place a lot of information and links on how to secure your identity and research. The researcher put this together in support of a panel on Risky Research that I am chairing this afternoon, part of a day of panels/workshops following the Edward Snowden talk yesterday evening. (You can see my blog entry on Snowden’s talk here.) The key topics covered include the following (a small illustrative sketch follows the list):

  • Basic Security Measures
  • Use End-to-End Encryption for Communications
  • Encrypt Your Computer
  • Destroy All Information
  • Secure Browsing
  • Encrypt all Web Traffic
  • Avoiding Attacks
  • On Preventing Doxing
  • Dealing with Harassment
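To give a flavour of what one of these topics covers, here is a minimal sketch of encrypting a file at rest using the Python cryptography package’s Fernet recipe. This is my illustration, not code from the recommendations document, and the file name is hypothetical; in practice the hard part is storing the key somewhere safer than the data.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the encrypted file.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical research file to protect at rest.
with open("interview_notes.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("interview_notes.txt.enc", "wb") as f:
    f.write(ciphertext)

# Decryption requires the same key.
plaintext = fernet.decrypt(ciphertext)
```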

Opinion | America’s Real Digital Divide

The problem isn’t that poor children don’t have access to computers. It’s that they spend too much time in front of them.

The New York Times has an important opinion piece about America’s Real Digital Divide by Naomi S. Riley from Feb. 11, 2018. She argues that TV and video-game screen time is bad for children and that there is no evidence that computer screen time is helpful. The digital divide is not one of access to screens but one of attitude and education about screen time.

But no one is telling poorer parents about the dangers of screen time. For instance, according to a 2012 Pew survey, just 39 percent of parents with incomes of less than $30,000 a year say they are “very concerned” about this issue, compared with about six in 10 parents in higher-earning households.

Social networks are creating a global crisis of democracy

[N]etworks themselves offer ways in which bad actors – and not only the Russian government – can undermine democracy by disseminating fake news and extreme views. “These social platforms are all invented by very liberal people on the west and east coasts,” said Brad Parscale, Mr. Trump’s digital-media director, in an interview last year. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.” Too right.

The Globe and Mail this weekend had an essay by Niall Ferguson on how Social networks are creating a global crisis of democracy. The article is based on Ferguson’s new book The Square and the Tower: Networks and Power from the Freemasons to Facebook. It points out that manipulation is not just an American problem, and argues that the real problem is our dependence on social networks in the first place.


Canadian Social Knowledge Institute

I just got an email announcing the soft launch of the Canadian Social Knowledge Institute (C-SKI). This institute grew out of the Electronic Textual Cultures Lab and the INKE project. Part of C-SKI is an Open Scholarship Policy Observatory, which has a number of partners through INKE.

The Canadian Social Knowledge Institute (C-SKI) actively engages issues related to networked open social scholarship: creating and disseminating research and research technologies in ways that are accessible and significant to a broad audience that includes specialists and active non-specialists. Representing, coordinating, and supporting the work of the Implementing New Knowledge Environments (INKE) Partnership, C-SKI activities include awareness raising, knowledge mobilization, training, public engagement, scholarly communication, and pertinent research and development on local, national, and international levels. Originated in 2015, C-SKI is located in the Electronic Textual Cultures Lab in the Digital Scholarship Centre at UVic.

Naylor Report in Voyant

[Figure: Correspondence Analysis (ScatterPlot) view of the Naylor Report in Voyant]

The Naylor Report (PDF) about research funding in Canada is out and we put it in Voyant. Here are some different views of the report, like the correspondence analysis scatterplot above.
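For those curious about what the correspondence analysis scatterplot is plotting, here is a minimal numpy sketch of correspondence analysis on a toy document-term matrix. The numbers are invented; Voyant builds the real matrix from the report’s text, and its implementation differs in detail.

```python
import numpy as np

# Toy document-term count matrix (rows: document segments, columns: terms).
N = np.array([[10, 2, 3],
              [4, 8, 1],
              [2, 3, 9],
              [5, 5, 5]], dtype=float)

P = N / N.sum()        # correspondence matrix
r = P.sum(axis=1)      # row masses
c = P.sum(axis=0)      # column masses

# Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: the x/y positions in the scatterplot view.
row_coords = (U * sigma) / np.sqrt(r)[:, None]     # document segments
col_coords = (Vt.T * sigma) / np.sqrt(c)[:, None]  # terms

print(row_coords[:, :2])  # first two dimensions for segments
print(col_coords[:, :2])  # first two dimensions for terms
```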
