Conference: Artificial Intelligence for Information Accessibility

AI for Society and the Kule Institute for Advanced Research helped organize a conference on Artificial Intelligence for Information Accessibility (AI4IA) on September 28th, 2020. The conference was held on the International Day for Universal Access to Information, which is why it focused on how AI can improve access to information. An important partner in the conference was the UNESCO Information For All Programme (IFAP) Working Group on Information Accessibility (WGIA).

International Day for Universal Access to Information focused on the right to information in times of crisis and on the advantages of having constitutional, statutory and/or policy guarantees for public access to information to save lives, build trust and help the formulation of sustainable policies through and beyond the COVID-19 crisis. Speakers talked about how vital access to accurate information is in these pandemic times and the role artificial intelligence could play as we prepare for future crises. Tied to this was a discussion of the important role for international policy initiatives and shared regulation in ensuring that smaller countries, especially in the Global South, benefit from developments in AI. The worry is that some countries won’t have the digital literacy or cadre of experts to critically guide the introduction of AI.

The AI4S Associate Director, Geoffrey Rockwell, kept notes on the talks here: Conference Notes on AI4IA 2020.

Why Uber’s business model is doomed

Like other ridesharing companies, it made a big bet on an automated future that has failed to materialise, says Aaron Benanav, a researcher at Humboldt University

Aaron Benanav has an important opinion piece in The Guardian about Why Uber’s business model is doomed. Benanav argues that Uber and Lyft’s business model is to capture market share and then ditch the drivers they have employed once self-driving cars become reliable. In other words, they are first disrupting the human taxi services so as to capitalize on driverless technology when it arrives. Their current business loses money as they feast on venture capital to buy market share, and if they can’t make the switch to driverless, they will likely go bankrupt.

This raises the question of whether we will ever see driverless technology good enough to oust human drivers. I suspect that we will see it in certain geo-fenced zones where Uber and Lyft can pressure local governments to discipline the streets so as to make them safe for driverless vehicles. In countries with chaotic, hard-to-map streets (think of medieval Italian towns) it may never work well enough.

All of this raises the deeper ethical issue of how driverless vehicles in particular, and AI in general, are being imagined and implemented. While there may be nothing unethical about driverless cars per se, there IS something unethical about a company deliberately bypassing government regulations, sucking up capital, and driving out the small human taxi businesses, all in order to monopolize a market that it can then profit from by replacing the drivers that got it there with driverless cars. Why is this the way AI is being commercialized, rather than trying to create better public transit systems or better systems for helping people with disabilities? Who do we hold responsible for the decisions, or lack of decisions, that see driverless AI technology implemented in a particularly brutal and illegal fashion? (See Benanav on the illegality of what Uber and Lyft are doing by forcing drivers to be self-employed contractors despite rulings to the contrary.)

It is this deeper set of issues around the imagination, implementation, and commercialization of AI that needs to be addressed. I imagine most developers won’t intentionally create unethical AIs, but many will create cool technologies that are commercialized by someone else in brutal and disruptive ways. Those commercializing and their financial backers (which are often all of us and our pension plans) will also feel no moral responsibility because we are just benefiting from (mostly) legal innovative businesses. Corporate social responsibility is a myth. At most corporate ethics is conceived of as a mix of public relations and legal constraints. Everything else is just fair game and the inevitable disruptions in the marketplace. Those who suffer are losers.

This then raises the issue of the ethics of anticipation. What is missing is imagination, anticipation and planning. If the corporate sector is rewarded for finding ways to use new technologies to game the system, then who is rewarded for planning for the disruption and, at a minimum, lessening the impact on the rest of us? Governments have planning units, like city planning units, but in every city I’ve lived in these units are bypassed by real money from developers unless there is that rare thing – a citizens’ revolt. Look at our cities and their sprawl – despite all sorts of research and a history of sprawl, there is still very little discipline or planning to constrain the developers. In an age when government is seen as essentially untrustworthy, planning departments start from a deficit of trust. Companies, entrepreneurs, innovation and, yes, even disruption are blessed with innocence as if, like children, they just do their thing and can’t be expected to anticipate the consequences or have to pick up after their play. We therefore wait for some disaster to remind everyone of the importance of planning and systems of resilience.

Now … how can we teach this form of deeper ethics without sliding into political thought?

The Man Behind Trump’s Facebook Juggernaut

Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.

I just finished reading important reporting about The Man Behind Trump’s Facebook Juggernaut in the March 9th, 2020 issue of the New Yorker. The long article suggests that it wasn’t Cambridge Analytica or the Russians who swung the 2016 election. If anything had an impact it was the extensive use of social media, especially Facebook, by the Trump digital campaign under the leadership of Brad Parscale. The Clinton campaign focused on TV spots and believed they were going to win. The Trump campaign gathered lots of data, constantly tried new things, and drew on their Facebook “embed” to improve their game.

If each variation is counted as a distinct ad, then the Trump campaign, all told, ran 5.9 million Facebook ads. The Clinton campaign ran sixty-six thousand. “The Hillary campaign thought they had it in the bag, so they tried to play it safe, which meant not doing much that was new or unorthodox, especially online,” a progressive digital strategist told me. “Trump’s people knew they didn’t have it in the bag, and they never gave a shit about being safe anyway.” (p. 49)

One interesting service Facebook offered was “Lookalike Audiences,” where you could upload a spotty list of information about people and Facebook would first fill it out from its data and then find you more people who are similar. This lets you expand the list of people you microtarget (and gets you paying Facebook for more targeted ads).

The end of the article gets depressing as it recounts how little the Democrats are doing to counter or match the social media campaign for Trump, which was essentially underway right after the 2016 election. One worries, by the end, that we will see a repeat.

Marantz, Andrew. (2020, March 9). “#WINNING: Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.” New Yorker, pp. 44-55.

Documenting the Now (and other social media tools/services)

Documenting the Now develops tools and builds community practices that support the ethical collection, use, and preservation of social media content.

I’ve been talking with the folks at MassMine (I’m on their Advisory Board) about tools that can gather information off the web, and I was pointed to the Documenting the Now project, which is based at the University of Maryland and the University of Virginia with support from Mellon. DocNow has developed tools and services around documenting the “now” using social media. DocNow itself is an “appraisal” tool for Twitter archiving. They also have a great catalog of Twitter archives they and others have gathered, which looks like it would be great for teaching.

MassMine is at present a command-line tool that can gather different types of social media data. They are building a web interface that will make it easier to use, and they plan to connect it to Voyant so you can analyze the results there. I’m looking forward to something easier to use than Python libraries.

Speaking of which, I found TAGS (Twitter Archiving Google Sheet), a plug-in for Google Sheets that can scrape smaller amounts of Twitter data. Another accessible tool is Octoparse, which is designed to scrape database-driven web sites. It is commercial, but has a 14-day trial.

One of the impressive features of the Documenting the Now project is that they are thinking about the ethics of scraping. They have a set of Social Labels for people to indicate how their data should be handled.

CEO of exam monitoring software Proctorio apologises for posting student’s chat logs on Reddit

Australian students who have raised privacy concerns describe the incident involving a Canadian student as ‘freakishly disrespectful’

The Guardian has a story about how the CEO of exam monitoring software Proctorio apologises for posting student’s chat logs on Reddit. Proctorio provides software for monitoring (proctoring) students on their own laptops while they take exams. It uses the video camera and watches the keyboard, presumably to detect whether the student tries to cheat on a timed exam. Apparently a UBC student claimed that he couldn’t get timely help from Proctorio while using it (presumably with the exam timer running). This led Australian students to criticize the use of Proctorio, which in turn led the CEO to argue that the UBC student had lied and to post a partial transcript to show that the student was answered in a timely fashion. That the CEO would post a partial transcript shows that:

  1. staff at Proctorio have access to the logs and transcripts of student behaviour, and
  2. they don’t have privacy protection protocols in place to prevent private information from being leaked.

I can’t help feeling that there is a pattern here, since we also see senior politicians sometimes leaking data about citizens who criticize them. The privacy protocols may be in place, but they aren’t observed or can’t be enforced against the senior staff (who are the ones who presumably need to do the enforcing). You also sense that the senior person feels the critic forfeited their right to privacy by lying or misrepresenting something in their criticism.

This raises the question of whether someone who misuses or lies about a service deserves the ongoing protection of the service. Of course, we want to say that they should, but nations like the UK have stripped citizens like Shamima Begum of their citizenship, and thus their rights, because they behaved traitorously by joining ISIS. Countries have killed their own citizens who became terrorists, without a trial. Clearly we feel that in some cases one can unilaterally remove someone’s rights, including the right to life, because of their behaviour.

The bad things that happen when algorithms run online shops

Smart software controls the prices and products you see when you shop online – and sometimes it can go spectacularly wrong, discovers Chris Baraniuk.

The BBC has a story about The bad things that happen when algorithms run online shops. The story describes how e-commerce systems designed to set prices dynamically (in comparison with someone else’s price, for example) can go wrong and end up charging customers much more than they will pay, or charging them virtually nothing so the store loses money.

The story links to an instructive blog entry by Michael Eisen about how two algorithms pushed the price of a book into the millions, Amazon’s $23,698,655.93 book about flies. The blog entry is a perfect little story about the problems you get when you have algorithms responding iteratively to each other without any sanity checks.
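Eisen’s post attributes the runaway price to two sellers’ rules compounding: one priced its copy at 0.9983 times the other’s, while the other marked its copy up to 1.270589 times the first’s. A minimal sketch of that feedback loop, with a hypothetical price cap added as the kind of sanity check that was missing:

```python
# Sketch of the repricing feedback loop Eisen describes: one seller
# undercuts slightly, the other marks up, and without a sanity check
# the prices compound past any reasonable value.

def reprice(price_a, price_b, rounds, cap=None):
    """Iteratively apply the two repricing rules; stop early if a cap is hit."""
    for _ in range(rounds):
        price_a = 0.9983 * price_b      # seller A: just under B's price
        price_b = 1.270589 * price_a    # seller B: markup over A's price
        if cap is not None and price_b > cap:
            break                       # the missing sanity check
    return price_a, price_b

a, b = reprice(100.0, 100.0, rounds=50)
print(f"After 50 rounds: A=${a:,.2f}  B=${b:,.2f}")  # well into the millions
```

Each round multiplies the higher price by roughly 0.9983 × 1.270589 ≈ 1.268, so fifty rounds is enough to take a $100 book into the millions – the same compounding that produced the $23 million fly book.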

MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs

Vinay Prabhu, chief scientist at UnifyID, a privacy startup in Silicon Valley, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland, pored over the MIT database and discovered thousands of images labelled with racist slurs for Black and Asian people, and derogatory terms used to describe women. They revealed their findings in a paper undergoing peer review for the 2021 Workshop on Applications of Computer Vision conference.

Another one of those “what were they thinking when they created the dataset” stories from The Register tells how MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The MIT Tiny Images dataset was created automatically using scripts that drew on the WordNet database of terms, which itself held derogatory terms. Nobody thought to check either the terms taken from WordNet or the resulting images scoured from the net. As a result, there are not only lots of images for which permission was not secured, but also racist, sexist, and otherwise derogatory labels on the images, which in turn means that if you train an AI on these it will generate racist and sexist results.
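The missing step would have been simple to automate. A hypothetical sketch of the kind of screening that could have run before scraping images for each term – the blocklist and candidate labels below are placeholders, not the actual WordNet terms or any real slurs:

```python
# Hypothetical sketch of the check that was missing: screen candidate
# label terms against a blocklist before scraping images for them.
# "slur_a" etc. are placeholders, not real terms from WordNet.

def screen_labels(labels, blocklist):
    """Partition candidate labels into a safe list and a flagged list."""
    flagged = {term for term in labels if term.lower() in blocklist}
    safe = [term for term in labels if term not in flagged]
    return safe, sorted(flagged)

blocklist = {"slur_a", "slur_b"}            # placeholder offensive terms
candidates = ["lamp", "slur_a", "bicycle"]  # placeholder WordNet nouns
safe, flagged = screen_labels(candidates, blocklist)
print(safe, flagged)  # ['lamp', 'bicycle'] ['slur_a']
```

A blocklist is crude (it catches nothing it hasn’t been told about), but even this much review of the source vocabulary would have caught the terms that made headlines.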

The article also mentions a general problem with academic datasets. Companies like Facebook can afford to hire actors to pose for images and can thus secure permission to use the images for training. Academic datasets (and some commercial ones, like the Clearview AI database) tend to be scraped and therefore will not have the explicit permission of the copyright holders or the people shown. In effect, academics are resorting to mass surveillance to generate training sets. One wonders if we could crowdsource a training set by and for people.

“Excellence R Us”: university research and the fetishisation of excellence

Is “excellence” really the most efficient metric for distributing the resources available to the world’s scientists, teachers, and scholars? Does “excellence” live up to the expectations that academic communities place upon it? Is “excellence” excellent? And are we being excellent to each other in using it?

During the panel today on Journals in the digital age: penser de nouveaux modèles de publication en sciences humaines at CSDH-SCHN 2020, someone linked to an essay on “Excellence R Us”: university research and the fetishisation of excellence in Palgrave Communications (2017). The essay does what should have been done some time ago: it questions the excellence of “excellence” as a value for everything in universities. The very overuse of “excellence” has devalued the concept. Surely much of what we do these days is “good enough”, especially as our budgets are cut and cut.

The article has three major parts:

  • Rhetoric of excellence – the first section looks at how there is little consensus around what excellence means between disciplines. Within disciplines it is negotiated and can become conservative.
  • Is “excellence” good for research? – the second section argues that there is little correlation between forms of excellence review and long-term metrics. They go on to outline some of the unfortunate side-effects of the push for excellence and how it can distort research and funding by promoting competition rather than collaboration. They also talk about how excellence disincentivizes replication – who wants to bother with replication if only novel results count as excellent?
  • Alternative narratives – the third section looks at alternative ways of distributing funding. They discuss “soundness” and “capacity” as alternatives to the winner-takes-all of excellence.

So much more could and should be addressed on this subject. I have often wondered about the effect of success rates in grant programmes (the percentage of applicants funded). When the success rate gets really low, as it is with many NEH programmes, it almost becomes a waste of time to apply, and superstitions about success abound. SSHRC has healthier success rates that generally ensure that most researchers get funded if they persist and rework their proposals.

Hypercompetition in turn leads to greater (we might even say more shameless …) attempts to perform this “excellence”, driving a circular conservatism and reification of existing power structures while harming rather than improving the qualities of the underlying activity.

Ultimately the “adjunctification” of the university, where few faculty get tenure, also leads to hypercompetition and an impoverished research environment. Getting tenure could end up being the most prestigious (and fundamental) of grants – the grant of a research career.


Google Developers Blog: Text Embedding Models Contain Bias. Here’s Why That Matters.

Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we’ll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.

On the Google Developers Blog there is an interesting post on Text Embedding Models Contain Bias. Here’s Why That Matters. The post describes a technique that uses Word Embedding Association Tests (WEAT) to compare different text embedding algorithms. The idea is to see whether groups of words, like gendered words, associate with positive or negative words. The post includes a figure showing the sentiment bias for female and male names under the different techniques.

While Google is working on WEAT to try to detect and deal with bias, in our case this technique could be used to identify forms of bias in corpora.
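The core of a WEAT-style association score is simple to sketch: take a target word’s mean cosine similarity to a set of pleasant attribute words and subtract its mean similarity to a set of unpleasant ones. The tiny hand-made vectors below are stand-ins for real embeddings and real name/attribute word lists, just to show the arithmetic:

```python
# Toy sketch of a WEAT-style association score. Real tests use full
# embedding models and the name/attribute word lists from the WEAT
# literature; these 2-d vectors are invented stand-ins.
from math import sqrt

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word_vec, pleasant, unpleasant):
    """Mean cosine to pleasant attributes minus mean cosine to unpleasant ones."""
    pos = sum(cos(word_vec, v) for v in pleasant) / len(pleasant)
    neg = sum(cos(word_vec, v) for v in unpleasant) / len(unpleasant)
    return pos - neg

# Placeholder "embeddings": dimension 0 loosely tracks sentiment.
pleasant = [(1.0, 0.1), (0.9, 0.2)]
unpleasant = [(-1.0, 0.1), (-0.9, 0.2)]
name_a = (0.8, 0.3)   # hypothetical name vector near the pleasant cluster
name_b = (-0.8, 0.3)  # hypothetical name vector near the unpleasant cluster

print(association(name_a, pleasant, unpleasant) > 0)  # True: skews positive
print(association(name_b, pleasant, unpleasant) < 0)  # True: skews negative
```

Run over sets of names rather than single words, differences in these scores between groups are what show up as the bias the Google post visualizes.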

The Viral Virus

[Figure: Relative frequency of the word “test*” over time]

Analyzing the Twitter Conversation Surrounding COVID-19

From Twitter I found out about this excellent visual essay, The Viral Virus, by Kate Appel from May 6, 2020. Appel used Voyant to study highly retweeted tweets from January 20th to April 23rd. She divided the tweets into weeks and then used the distinctive words (tf-idf) tool to tell a story about the changing discussion of Covid-19. As you scroll down you see lists of distinctive words and supporting images. At the end she shows some of the topics derived from topic modelling. It is a remarkably simple but effective use of Voyant.
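The “distinctive words” measure Appel used is essentially tf-idf: a term scores high for a week when it is frequent in that week’s tweets but rare across the other weeks. A rough sketch of the idea, with toy one-line stand-ins for the weekly tweet collections:

```python
# Rough sketch of tf-idf "distinctive words": for each document (week),
# score each term by its frequency weighted by how rare it is across
# all documents, then keep the top-scoring terms.
from collections import Counter
from math import log

def tfidf_top(docs, k=2):
    """Return the top-k tf-idf terms for each document in docs."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many docs does each term appear?
    df = Counter(term for toks in tokenized for term in set(toks))
    tops = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {t: tf[t] * log(n / df[t]) for t in tf}
        tops.append([t for t, _ in sorted(scores.items(),
                                          key=lambda kv: -kv[1])[:k]])
    return tops

weeks = [
    "virus spreads china travel travel",   # toy stand-in for week 1
    "virus lockdown schools close close",  # toy stand-in for week 2
    "virus test test testing shortage",    # toy stand-in for week 3
]
print(tfidf_top(weeks))
```

Note that “virus”, which appears every week, gets an idf of zero and so never counts as distinctive – which is exactly why the measure surfaces the changing vocabulary (“test”, “lockdown”) rather than the constant topic.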