The 100 Worst Ed-Tech Debacles of the Decade

With the end of the year there are some great articles showing up reflecting on the debacles of the decade. One of my favorites is Audrey Watters’ The 100 Worst Ed-Tech Debacles of the Decade. Ed-Tech is one of those fields where, over and over, techies think they know better. Some of the debacles Watters discusses:

  • 3D Printing
  • The “Flipped Classroom” (Full disclosure: I sat on a committee that funded these.)
  • Op-Eds to ban laptops
  • Clickers
  • Stories about the end of the library
  • Interactive whiteboards
  • The K-12 Cyber Incident Map (Check it out here)
  • IBM Watson
  • The Year of the MOOC

This collection of 100 terrible ideas in instructional technology should be mandatory reading for all of us who have been keen on ed-tech. (And I am one who has developed ed-tech and oversold it.) Each item is a mini essay with links worth following.

ParityBOT: Twitter bot

ParityBOT is a chatbot developed here in Edmonton that tweets positive things about women in politics in response to hateful tweets. It sends empowering messages.

You can read about it in a CBC story, Engineered-in-Edmonton Twitter bot combats misogyny on the campaign trail.

The bot follows all the women candidates in the election and uses some sort of AI or sentiment detection to identify nasty tweets aimed at them, then responds with a positive message drawn from a collection crowdsourced from the public. What isn’t clear is whether the positive message is sent to the offending tweeter or just posted generally.
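The CBC story doesn’t spell out the mechanics, but the basic pattern is easy to sketch. Here is a minimal, hypothetical version in Python; the toy scorer, threshold, and messages are my assumptions, not ParityBOT’s actual code:

```python
import random

# A minimal sketch of the pattern described above: score tweets aimed at
# candidates and answer hateful ones with a crowdsourced positive message.
# The lexicon scorer and threshold are stand-ins, not the real bot.

POSITIVE_MESSAGES = [  # the real collection was crowdsourced from the public
    "Women in politics make our democracy stronger.",
    "Thank you to every woman who puts her name on a ballot.",
]

NEGATIVE_WORDS = {"hate", "ugly", "stupid", "liar"}  # toy lexicon stand-in
HATE_THRESHOLD = 2  # two or more negative words counts as "nasty" here

def toxicity(text: str) -> int:
    """Stand-in for a real sentiment/toxicity model or API."""
    return sum(1 for w in text.lower().split()
               if w.strip(".,!?") in NEGATIVE_WORDS)

def respond(tweet: str) -> str | None:
    """Return a positive message for a hateful tweet, else None."""
    if toxicity(tweet) >= HATE_THRESHOLD:
        return random.choice(POSITIVE_MESSAGES)
    return None

print(respond("You are a stupid liar and nobody likes you!"))
```

In the real system the scoring would be a trained model or hosted API rather than a word list, but the flow, score then respond, is the same.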

ParityBOT was developed by ParityYEG, a collaboration between the Alberta Machine Intelligence Institute and scientist Kory Mathewson.

Slaughterbots

On the Humanist discussion list John Keating recommended the short video Slaughterbots that presents a plausible scenario where autonomous drones are used to target dissent using social media data. Watch it! It is well done and presents real issues in a credible short video.

While the short is really about autonomous weapons and the need to ban them, I note that one of the ideas included is that dissent could be silenced by using social media to target people. The scenario imagines that university students who shared a dissenting video on social media have their data harvested (including images of their faces) and that the drones then target them using face recognition. Science fiction, but suggestive of how a social media presence can be used for control.


Linked Infrastructure For Networked Cultural Scholarship Team Meeting 2019

This weekend I was at the Linked Infrastructure For Networked Cultural Scholarship (LINCS) Team Meeting 2019. The meeting/retreat was in Banff at the Banff International Research Station and I kept my research notes at philosophi.ca.

The goal of LINCS is to create a shared linked data store that humanities projects can draw on and contribute to. This would let us link our digital resources in ways that create new intellectual connections and allow us to reason about the linked data.
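To make the idea concrete, here is a small sketch (my own, not LINCS code) of what contributing to and querying a shared linked data store looks like in Python with rdflib; the URIs and the example are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

# Invented example namespace; LINCS would define its own shared vocabularies.
EX = Namespace("http://example.org/lincs/")

g = Graph()
author = URIRef(EX["margaret-laurence"])

# One project contributes triples about a person and a text...
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Margaret Laurence")))
g.add((author, EX.wrote, URIRef(EX["the-stone-angel"])))

# ...and any other project can then query (and reason over) the shared store.
results = g.query("""
    SELECT ?name ?work WHERE {
        ?p a <http://xmlns.com/foaf/0.1/Person> ;
           <http://xmlns.com/foaf/0.1/name> ?name ;
           <http://example.org/lincs/wrote> ?work .
    }
""")
for name, work in results:
    print(name, "wrote", work)
```

The intellectual payoff comes when two projects use the same URI for the same person or work: their assertions automatically connect.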

Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming

“Every move you make … every word you say, every game you play … I’ll be watching you.” (The Police, “Every Breath You Take”)

Education Week has an alarming story about how schools are using surveillance, Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming. The story is by Benjamin Herold and dates from May 30, 2019. It talks not only about the deployment of cameras, but also about the use of companies like Social Sentinel, Securly, and Gaggle that monitor social media or school computers.

Every day, Gaggle monitors the digital content created by nearly 5 million U.S. K-12 students. That includes all their files, messages, and class assignments created and stored using school-issued devices and accounts.

The company’s machine-learning algorithms automatically scan all that information, looking for keywords and other clues that might indicate something bad is about to happen. Human employees at Gaggle review the most serious alerts before deciding whether to notify school district officials responsible for some combination of safety, technology, and student services. Typically, those administrators then decide on a case-by-case basis whether to inform principals or other building-level staff members.
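The passage describes a two-stage pipeline: automated scanning that raises alerts, then human triage of the most serious ones. A rough sketch of that structure in Python (my reconstruction, not Gaggle’s code; the keywords and severity rules are invented):

```python
from dataclasses import dataclass

# Rough sketch of the two-stage flow described in the story: automated
# scanning produces alerts, humans triage the serious ones. The keyword
# lists and severity rules here are illustrative assumptions.

URGENT_KEYWORDS = {"kill", "suicide", "gun"}   # toy list
WATCH_KEYWORDS = {"fight", "drunk", "cheat"}   # toy list

@dataclass
class Alert:
    student_id: str
    excerpt: str
    severity: str  # "urgent" alerts go to a human reviewer first

def scan(student_id: str, text: str) -> Alert | None:
    """Stage 1: automated scan standing in for the ML models."""
    words = set(text.lower().split())
    if words & URGENT_KEYWORDS:
        return Alert(student_id, text[:80], "urgent")
    if words & WATCH_KEYWORDS:
        return Alert(student_id, text[:80], "watch")
    return None

def triage(alert: Alert) -> None:
    """Stage 2: only serious alerts reach human review, then the district."""
    if alert.severity == "urgent":
        print(f"Queue for human reviewer: {alert}")
    else:
        print(f"Log for routine report: {alert}")

alert = scan("s123", "I am going to bring a gun tomorrow")
if alert:
    triage(alert)
```

Even in this toy form you can see where the harms enter: everything a student writes passes through stage 1, and a crude match is enough to put their words in front of an adult stranger.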

The story provides details that run from the serious to the absurd. It mentions concerns by the ACLU that such surveillance can desensitize children to surveillance and make it seem normal. The ACLU story draws a connection to laws that forbid federal agencies from studying or sharing data that could make the case for gun control. This creates a situation where the obvious ways to stop gun violence in schools aren’t studied, so surveillance companies step in with solutions.

Needless to say, surveillance has its own potential harms beyond desensitization. The ACLU story lists the following potential harms:

  • Suppression of students’ intellectual freedom, because students will not want to investigate unpopular or verboten subjects if the focus of their research might be revealed.
  • Suppression of students’ freedom of speech, because students will not feel at ease engaging in private conversations they do not want revealed to the world at large.
  • Suppression of students’ freedom of association, because surveillance can reveal a student’s social contacts and the groups a student engages with, including groups a student might wish to keep private, like LGBTQ organizations or those promoting locally unpopular political views or candidates.
  • Undermining students’ expectation of privacy, which occurs when they know their movements, communications, and associations are being watched and scrutinized.
  • False identification of students as safety threats, which exposes them to a range of physical, emotional, and psychological harms.

As with the massive investment in surveillance for national security and counter-terrorism purposes, we need to ask whether the cost of these systems, financial and otherwise, is worth it. Unfortunately, protecting children, like protecting against terrorism, is hard to put a price on, which makes it hard to argue against such investments.

Rights Statements

At the SpokenWeb symposium at SFI I learned about the web site RightsStatements.org. The site provides example rights statements to use and put on the web, for example In Copyright – Rights-Holder(s) Unlocatable or Unidentifiable. These often use American rather than Canadian legal language, but they are a useful resource.
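In practice, putting a statement “on the web” usually means attaching the statement’s URI to an item’s metadata. A small sketch in Python with rdflib (my example; the item URI is invented, while the rights URI is the statement mentioned above):

```python
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS

# Attach a RightsStatements.org URI to a digital object's metadata.
# The collection item URI is invented for illustration.
g = Graph()
item = URIRef("http://example.org/collection/recording-42")
rights = URIRef("http://rightsstatements.org/vocab/InC-RUU/1.0/")

g.add((item, DCTERMS.rights, rights))
print(g.serialize(format="turtle"))
```

Because the statement is a stable URI rather than free text, aggregators and other projects can interpret the rights machine-readably.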

Another, better known source for rights statements is Creative Commons, but it is aimed more at creators than at cultural heritage online.

Apple News and News+

After a month or so of being subscribed to Apple News+, today I dropped the subscription. It was one more subscription, and they add up. More importantly, it wasn’t giving me the news. When Notre Dame was burning Apple News+ was feeding me inane lifestyle stories. It didn’t seem to be able to gather and present current news, just glossy magazine articles. Perhaps it wasn’t meant to compete with Google News. Perhaps I wasn’t meant to pay for it.

AI, Ethics And Society

Last week we held a conference on AI, Ethics and Society at the University of Alberta. As I often do, I kept conference notes at: philosophi.ca : AI Ethics And Society.

The conference was opened by Reuben Quinn, whose grandfather signed Treaty 6. He challenged us to think about what labels and labelling mean. Later Kim TallBear challenged us to think about how we want the encounter with other intelligences to go. We don’t have a good track record of encountering the other and respecting intelligence. Now is the time to think about our positionality and to develop protocols for encounters. We should also be open to different forms of intelligence, not just ours.

Centrelink scandal

Data shows 7,456 debts were reduced to zero and another 12,524 partially reduced between July last year and March

The Guardian has a number of stories on the Australian Centrelink scandal, including Centrelink scandal: tens of thousands of welfare debts wiped or reduced. The scandal arose when the government introduced changes to a system for calculating overpayments to welfare recipients and clawing them back, changes that removed much of the human oversight. The result was miscalculated debts being automatically assigned to some of the most vulnerable. A report, Paying the Price of Welfare Reform, concluded that,

The research concludes that although welfare reform may be leading to cost savings for the Department of Human Services (DHS), substantial costs are being shifted to vulnerable customers and the community services that support them. It is they that are paying the price of welfare reform.
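The miscalculation at the heart of the scandal, as widely reported, was income averaging: annual tax-office income was smoothed across fortnights, so anyone with irregular earnings looked overpaid. A toy illustration in Python (the rates and numbers are invented, not DHS’s actual formula):

```python
# Toy illustration of the income-averaging flaw reported in the coverage:
# averaging annual income across fortnights wrongly flags people who were
# on benefits for only part of the year. All rates here are invented.

FORTNIGHTS = 26
INCOME_FREE_AREA = 300   # invented: earnings below this don't reduce payment
TAPER = 0.5              # invented: 50c reduction per dollar above it
BASE_PAYMENT = 600       # invented fortnightly benefit

def entitlement(fortnightly_income: float) -> float:
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, BASE_PAYMENT - TAPER * excess)

# Unemployed for 13 fortnights (zero income, full benefit), then a job
# paying $1,000 a fortnight for the rest of the year, off benefits.
income = [0.0] * 13 + [1000.0] * 13
on_benefits = [True] * 13 + [False] * 13
paid = [BASE_PAYMENT if b else 0.0 for b in on_benefits]

avg_income = sum(income) / FORTNIGHTS  # $500 every fortnight: a fiction

# The automated system recomputes entitlement from the averaged figure
# for the fortnights benefits were paid, and bills the difference.
debt = sum(p - entitlement(avg_income)
           for p, b in zip(paid, on_benefits) if b)
print(f"Phantom debt raised: ${debt:,.0f}")  # $1,300 owed for nothing
```

The person reported their income correctly and was paid correctly, yet the averaged model manufactures a debt, which is why human oversight mattered.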


We Built a (Legal) Facial Recognition Machine for $60

The law has not caught up. In the United States, the use of facial recognition is almost wholly unregulated.

The New York Times has an opinion piece by Sahil Chinoy, We Built a (Legal) Facial Recognition Machine for $60. They describe an inexpensive experiment in which they took footage of people walking past cameras installed in Bryant Park and compared the faces to those of known people who work in the area (scraped from the web sites of organizations that have offices in the neighborhood). Everything they did used public resources that others could use. The cameras stream their footage here. Anyone can scrape the images. The image database they gathered came from public web sites. The software is a service (Amazon’s Rekognition?). The article asks us to imagine the resources available to law enforcement.
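The pipeline has three public ingredients: a video stream, a scraped gallery of known faces, and an off-the-shelf matching service. Here is a minimal sketch of the matching step using Amazon’s Rekognition (which the article hints at) via boto3; this is my reconstruction, not the Times’ code, and it assumes frames have already been captured and photos scraped:

```python
import boto3

# Sketch of the matching step: compare a face captured from a public
# camera against a gallery of scraped photos of people who work nearby.
# Error handling (e.g., frames with no detectable face) is omitted.

rekognition = boto3.client("rekognition", region_name="us-east-1")

def match(frame_jpeg: bytes, gallery: dict[str, bytes],
          threshold: float = 90.0) -> list[tuple[str, float]]:
    """Return (name, similarity) for gallery photos matching the frame."""
    hits = []
    for name, photo in gallery.items():
        resp = rekognition.compare_faces(
            SourceImage={"Bytes": photo},       # known person (scraped photo)
            TargetImage={"Bytes": frame_jpeg},  # frame from the public stream
            SimilarityThreshold=threshold,
        )
        if resp["FaceMatches"]:
            hits.append((name, resp["FaceMatches"][0]["Similarity"]))
    return hits
```

The unsettling point of the experiment is that nothing here is exotic: the service is a few cents per call, and the data is already public.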

I’m intrigued by this experiment by the New York Times. It is a form of design thinking: they designed something to help us understand the implications of a technology rather than just writing about what others say. Or we could call it a form of journalistic experimentation.

Why does facial recognition spook us? Is recognizing people something we feel is deeply human? Or is it the potential for recognition in all sorts of situations? Do we need to start guarding our faces?

Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.

This is one of a number of excellent articles by the New York Times that are part of their Privacy Project.