The Man Behind Trump’s Facebook Juggernaut

Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.

I just finished reading an important piece of reporting, The Man Behind Trump’s Facebook Juggernaut, in the March 9, 2020 issue of the New Yorker. The long article suggests that it wasn’t Cambridge Analytica or the Russians who swung the 2016 election. If anything had an impact, it was the extensive use of social media, especially Facebook, by the Trump digital campaign under the leadership of Brad Parscale. The Clinton campaign focused on TV spots and believed they were going to win. The Trump campaign gathered lots of data, constantly tried new things, and drew on their Facebook “embed” to improve their game.

If each variation is counted as a distinct ad, then the Trump campaign, all told, ran 5.9 million Facebook ads. The Clinton campaign ran sixty-six thousand. “The Hillary campaign thought they had it in the bag, so they tried to play it safe, which meant not doing much that was new or unorthodox, especially online,” a progressive digital strategist told me. “Trump’s people knew they didn’t have it in the bag, and they never gave a shit about being safe anyway.” (p. 49)

One interesting service Facebook offered was “Lookalike Audiences”: you could upload a spotty list of information about people, and Facebook would first fill it out from its own data and then find you more people who are similar. This lets you expand the list of people you can microtarget (and gets you paying Facebook for more targeted ads).
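To make the idea concrete, here is a conceptual sketch in Python of how lookalike matching might work, assuming users are already represented as numeric feature vectors. This is my illustration of the general idea, not Facebook’s actual algorithm or API:

    # Conceptual sketch of lookalike matching (not Facebook's actual system).
    # Assume each user is represented as a vector of behavioural features.
    import numpy as np

    rng = np.random.default_rng(0)
    all_users = rng.random((1000, 20))   # 1000 users, 20 features each
    seed_audience = all_users[:50]       # the uploaded "spotty" list, filled out

    centroid = seed_audience.mean(axis=0)              # profile of the seed list
    distances = np.linalg.norm(all_users - centroid, axis=1)
    lookalikes = distances.argsort()[:200]             # 200 most similar users
    # (in practice you would exclude the seed users themselves)

The targeting value comes from that last step: the platform can keep expanding the audience outward from the seed list for as long as you keep paying.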

The end of the article gets depressing as it recounts how little the Democrats are doing to counter or match the social media campaign for Trump, which got underway essentially right after the 2016 election. One worries, by the end, that we will see a repeat.

Marantz, Andrew. (2020, March 9). “#WINNING: Brad Parscale used social media to sway the 2016 election. He’s poised to do it again.” The New Yorker, pp. 44-55.

A Letter on Justice and Open Debate

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty. We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reforms. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. 

Harper’s has published A Letter on Justice and Open Debate signed by all sorts of important people, from Salman Rushdie and Margaret Atwood to J.K. Rowling. The letter is critical of what might be called “cancel culture.”

The letter itself has been critiqued for coming from privileged writers who don’t experience the daily silencing of racism or other forms of prejudice. See the Guardian’s Is free speech under threat from ‘cancel culture’? Four writers respond for a range of responses to the letter, both critical and supportive.

This issue doesn’t seem to me that new. We have been struggling for some time with issues around the tolerance of intolerance. There is a broad range of what is considered tolerable speech and, I think, everyone would agree that there is also intolerable speech that doesn’t merit airing and countering. The problem is knowing where the line is.

What is missing on the internet is a sense of dialogue. Those who speechify (including me in blog posts like this) do so without entering into dialogue with anyone. We are all broadcasters, many without much of an audience. Entering into dialogue, by contrast, carries commitments: to continue the dialogue, to listen, to respect, and to work for resolution. In the broadcast chaos all you can do is pick the stations you will listen to and cancel the others.

Documenting the Now (and other social media tools/services)

Documenting the Now develops tools and builds community practices that support the ethical collection, use, and preservation of social media content.

I’ve been talking with the folks at MassMine (I’m on their Advisory Board) about tools that can gather information off the web, and I was pointed to the Documenting the Now project, which is based at the University of Maryland and the University of Virginia with support from the Mellon Foundation. DocNow has developed tools and services for documenting the “now” using social media. DocNow itself is an “appraisal” tool for Twitter archiving. They also have a great catalog of Twitter archives that they and others have gathered, which looks like it would be great for teaching.
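DocNow’s best-known collection tool is twarc, a command-line utility and Python library for archiving tweets as line-oriented JSON. A minimal sketch of the Python interface (this assumes the twarc 1.x API; the four credentials are placeholders you would get from a Twitter developer account):

    # Archive tweets matching a query with DocNow's twarc (1.x interface).
    import json
    from twarc import Twarc

    # Placeholder credentials from a Twitter developer account
    t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    with open("covid19.jsonl", "w") as archive:
        for tweet in t.search("#covid19"):
            archive.write(json.dumps(tweet) + "\n")   # one tweet per line

The resulting JSONL file is the sort of archive that can then be appraised in DocNow or loaded into an analysis tool.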

MassMine is at present a command-line tool that can gather different types of social media. They are building a web interface that will make it easier to use, and they plan to connect it to Voyant so you can analyze the results there. I’m looking forward to something easier to use than Python libraries.

Speaking of which, I found TAGS (Twitter Archiving Google Sheet), a plug-in for Google Sheets that can scrape smaller amounts of Twitter. Another accessible tool is Octoparse, which is designed to scrape database-driven web sites. It is commercial, but has a 14-day trial.

One of the impressive features of the Documenting the Now project is that they are thinking about the ethics of scraping. They have a Social Labels set that lets people indicate how their data should be handled.

What is the TikTok subculture Dark Academia?

School may be out indefinitely, but on social media there’s a thriving subculture devoted to the aesthetic of all things scholarly.

The New York Times has an article answering the question, What is the TikTok subculture Dark Academia? It describes a subculture that started on Tumblr and evolved on TikTok and Instagram, one that values a tweedy academic aesthetic. Sort of Hogwarts meets humanism. Alas, just as the aesthetics of humanities academic culture become a thing, they get superseded by Goblincore. Or do they just fade like a pressed flower?

Now we need to start a retro Humanities Computing aesthetic.

The Viral Virus

Relative Frequency of word “test*” over time

Analyzing the Twitter Conversation Surrounding COVID-19

From Twitter I found out about an excellent visual essay, The Viral Virus, by Kate Appel from May 6, 2020. Appel used Voyant to study highly retweeted tweets from January 20th to April 23rd. She divided the tweets into weeks and then used the distinctive words (tf-idf) tool to tell a story about the changing discussion of COVID-19. As you scroll down you see lists of distinctive words and supporting images. At the end she shows some of the topics gained from topic modelling. It is a remarkably simple, but effective, use of Voyant.
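Voyant’s distinctive words tool is essentially tf-idf, and the week-by-week approach is easy to sketch outside Voyant as well. Here is a minimal illustration with scikit-learn (my own toy example, not Appel’s actual pipeline), treating each week’s tweets as one document:

    # Distinctive words per week via tf-idf; each week's tweets = one document.
    from sklearn.feature_extraction.text import TfidfVectorizer

    weeks = {   # placeholder text standing in for each week's tweets
        "week_1": "wuhan pneumonia outbreak cases china travel",
        "week_2": "quarantine cruise ship passengers evacuation",
        "week_3": "tests testing shortage symptoms distancing",
    }

    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(weeks.values())
    terms = vec.get_feature_names_out()

    for i, week in enumerate(weeks):
        scores = tfidf[i].toarray().ravel()
        top = scores.argsort()[::-1][:3]        # three highest-scoring terms
        print(week, [terms[j] for j in top])

Words common to every week get down-weighted, which is what lets the weekly lists tell the story of a changing conversation.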

The tech ‘solutions’ for coronavirus take the surveillance state to the next level

Neoliberalism shrinks public budgets; solutionism shrinks public imagination.

Evgeny Morozov has a crisp essay in The Guardian on how The tech ‘solutions’ for coronavirus take the surveillance state to the next level. He argues that neoliberal austerity cut back our public services in ways that we now see are endangering lives, but that it is solutionism that constrains our ideas about what we can do to deal with situations. If we look for a technical solution, we give up on questioning the underlying defunding of the commons.

There is a nice interview between Natasha Dow Schüll and Morozov, The Folly of Technological Solutionism: An Interview with Evgeny Morozov, in which they talk about his book To Save Everything, Click Here: The Folly of Technological Solutionism and about gamification.

Back in The Guardian, Morozov ends his essay warning that we shouldn’t just be picking between apps, that is, between solutions. We should get beyond solutions like apps to thinking politically.

The feast of solutionism unleashed by Covid-19 reveals the extreme dependence of the actually existing democracies on the undemocratic exercise of private power by technology platforms. Our first order of business should be to chart a post-solutionist path – one that gives the public sovereignty over digital platforms.

Covid-19 Notice on YouTube

COVID-19 Popup Notice on YouTube

When you go to YouTube now in Canada, a notice from the Public Health Agency of Canada pops up inviting you to Learn More from a reliable source. This strikes me as a great way to encourage people to get their information from a reliable source rather than wallow in fake news online. It is particularly appropriate for YouTube, which is one of the facilitators of fake news.

More generally it shows an alternative way that social media platforms can fight fake news on key issues. They can work with governments to put appropriate information before people.

Further, the Learn More notice links to a government site with a wealth of information and links. Had it just been a short feel-good message with a bit of advice, the site probably wouldn’t work to distract people towards reliable information. Instead the site has enough depth that one could spend a lot of time there and get a satisfying picture. This is what one needs to fight fake news in a time of obsession: plenty of true news for the obsessed.

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too

We used the California Consumer Privacy Act to see what information the controversial facial recognition company has collected on me.

Anna Merlan has an important story on Vice, Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too (Feb. 28, 2020). She used the California Consumer Privacy Act to ask Clearview AI what information they kept on her and then to delete it. They asked her for a photo and proof of identification and eventually sent her a set of images and an index of where they came from. What is interesting is that they aren’t just scraping social media; they are scraping other scrapers like Insta Stalkers and various right-wing sources that presumably have photos and stories about “dangerous intellectuals” like Merlan.

This brings back up the question of what is so unethical about face recognition and the storage of biometrics. We all have pictures of people in our photo collections, and Clearview AI was scraping public photos. Is it then the use of the images that is the problem? Or is it the recognition and search capability?

It’s the (Democracy-Poisoning) Golden Age of Free Speech

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence?

There have been a number of stories bemoaning what has become of free speech. For example, WIRED has one titled It’s the (Democracy-Poisoning) Golden Age of Free Speech by Zeynep Tufekci (Jan. 16, 2020). In it she argues that access to an audience for your speech is no longer a matter of getting into centralized media; it is now a matter of getting attention. The world’s attention is managed by a very small number of platforms (Facebook, Google and Twitter) using algorithms that maximize their profits by keeping us engaged so they can sell our attention for targeted ads.


Facebook to Pay $550 Million to Settle Facial Recognition Suit

It was another black mark on the privacy record of the social network, which also reported its quarterly earnings.

The New York Times has a story, Facebook to Pay $550 Million to Settle Facial Recognition Suit (Natasha Singer and Mike Isaac, Jan. 29, 2020). The Illinois case has to do with Facebook’s face recognition technology, part of Tag Suggestions, which would suggest names for people in photos. Apparently in Illinois it is illegal to harvest biometric data without consent. The Biometric Information Privacy Act (BIPA), passed in 2008, “guards against the unlawful collection and storing of biometric information.” (Wikipedia entry)

BIPA suggests a possible answer to the question of what is unethical about face recognition. While I realize that a law is not ethics (and vice versa), BIPA hints at one of the ways we can try to unpack the ethics of face recognition. The position suggested by BIPA would go something like this:

  • Face recognition is dependent on biometric data, which is extracted from an image or some other form of scan.
  • To collect and store biometric data one needs the consent of the person whose data is collected.
  • The data has to be securely stored.
  • The data has to be destroyed in a timely manner.
  • If there is consent, secure storage, and timely deletion of the data, then the system/service can be said to not be unethical.
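
As a thought experiment, the position can even be written down as pseudo-policy in code. Here is a minimal sketch (my own illustration, not any real compliance system; the retention period is an assumption, and secure storage is assumed to be handled elsewhere):

    # Illustrative sketch of the BIPA-style position above: store biometric
    # data only with consent and purge it after a retention period.
    # This is a thought experiment, not a real compliance system.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)   # assumed "timely deletion" deadline

    @dataclass
    class BiometricRecord:
        person_id: str
        encoding: list        # e.g. a numeric face encoding
        consented: bool
        collected: datetime

    class BiometricStore:
        def __init__(self):
            self._records = []

        def collect(self, record):
            if not record.consented:
                raise PermissionError("No consent: refusing to store biometrics")
            self._records.append(record)

        def purge_expired(self, now):
            self._records = [r for r in self._records
                             if now - r.collected < RETENTION]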

There are a number of interesting points to be made about this position. First, it is not the gathering, storing, and providing access to images of people that is at issue. Face recognition is an ethical issue because biometric data about a person is being extracted, stored, and used. Thus Google Image Search is not an issue, as it stores data about whole images, while Facebook stores information about the faces of individual people (along with associated information).
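The distinction is easy to see in code. With the open-source face_recognition library (my example, not Facebook’s or Clearview’s systems; “portrait.jpg” is a placeholder file), a photo is reduced to a numeric encoding, and it is the encoding, not the image, that is the biometric datum:

    # The biometric datum is the 128-number encoding extracted from a photo,
    # not the photo itself. Uses the open-source face_recognition library.
    import face_recognition

    image = face_recognition.load_image_file("portrait.jpg")  # placeholder file
    encodings = face_recognition.face_encodings(image)  # one vector per face

    for enc in encodings:
        print(len(enc))   # 128 floats that can be matched across photos

Deleting the photos while keeping the encodings would still leave a searchable biometric database, which fits with BIPA targeting the data rather than the images.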

This raises issues about the nature of biometric data. What is the line between a portrait (image) and biometric information? Would gathering biographical data about a person become biometric at some point if it contained descriptions of their person?

Second, my reading is that a service like Clearview AI could also be sued if they scrape images of people in Illinois and extract biometric data. This could provide an answer to the question of what is ethically wrong about the Clearview AI service. (See my previous blog entry on this.)

Third, I think there is a missing further condition that should be specified, namely that the company gathering the biometric data should identify the purpose for which it is being gathered when seeking consent, and should limit use of the data to the identified purposes. When they no longer need the data for the identified use, they should destroy it. This is essentially part of the PIPEDA principle of Limiting Use, Disclosure and Retention. It is assumed that if one is to delete data in a timely fashion there will be some usage criteria that determine timeliness, but that isn’t always the case. Sometimes it is just the passage of time.

Of course, the value of data mining is often in the unanticipated uses of data like biometric data. Unanticipated uses are, by definition, not uses that were identified when seeking consent, unless the consent was so broad as to be meaningless.

No doubt more issues will occur to me.