Measuring the impact of Russian social media propaganda
Here’s a question: what is the causal impact of political propaganda? What difference does it make?
One thing is clear: it’s thought to make a big impact. If we allow that political ads, newspaper articles, and art can at least sometimes contain propaganda, then the fact that throughout history we’ve tried to control who gets to make them suggests we think access matters. That we worry about elections being bought means we think they can be bought, and that ultimately seems to mean we think that by throwing money at TV, newspapers, or Facebook ads we can shape the electorate.
At the same time, one might reason as follows: political propaganda is a sort of advertising, and the evidence for advertising’s effectiveness is pretty dodgy (I don’t know this literature and am taking this as folk wisdom, but see e.g. here). If the Silicon Valley people spending billions can be wrong about the foundations of their business, can’t politicians be wrong as well?
You might have this objection: the question as posed so far is too vague. We need to know what political propaganda amounts to before we can even think about the issue. I agree, so I will sketch a preliminary way of thinking about it; the hope is that the body of my post will help speak to the nature of propaganda. In previous posts I have suggested two possible models of propaganda; it will help to rehearse them, both because I think the topic is inherently interesting and because it will help explain what follows.
Qualitative vs Quantitative Models of Propaganda
Think of some examples of propaganda. What comes first to your mind? If you’re like me, the answer, truistic as it sounds, is: memorable things. Propaganda works by getting just the right image or phrase to stick in your head.
In a TV programme about Brexit, for example, there’s a scene in which the Brexiteers agonise over the wording of their slogan “take back control”, with the mastermind Cummings eventually hitting on just the right phraseology, one that a commentator went so far as to call “the reason why the United Kingdom voted to leave the European Union”.
But examples abound. The fight over ‘pro-life’ vs ‘pro-choice’, for example, shows how important it is to us that our side have ‘pro’ attached to our view. As George Lakoff famously pointed out, it was a coup on the GOP’s part to frame low taxation as ‘tax relief’ as if taxation were an affliction to be relieved of.
That suggests a view: propaganda as art. It’s concerned with finding the mot juste to deliver its message. And we can look to theorists for an explanation of why this is. Jason Stanley, whom I think of as a main defender of this sort of view, has the idea that propaganda is a sort of discourse that short-circuits rational discussion, using sneaky devices, somewhat technical from a linguistic perspective, to do so.
Central to his view is the distinction between at-issue and not-at-issue content. The former is what one says; the latter is trickier, it is content not quite asserted but nevertheless associated with what one says. A classic example is something like
- Have you stopped picking up dogs and punting them in the Irish sea?
What does one say to that? If one says ‘yes’, one seems to have committed to prior dog-punting. If one says ‘no’, one seems committed to one’s dog-punting being ongoing. Neither answer seems good.
Technically, a question like that presupposes that you have picked up dogs and punted them in the past; to answer it, one needs to accept that presupposition, which is accordingly not-at-issue content associated with the question.
More redolent examples come from slurs, something both important to Stanley and to this post. Consider:
- He’s not a taig
‘Taig’ is (or at least was when I was at school) a derogatory name for a Northern Irish Catholic. It’s somewhat offensive. And note this interesting thing: as in the above example, it’s hard to respond to. If one accepts what’s said, that’s obviously no good. But neither does it work to deny what the person says: ‘no he isn’t’ seems to presuppose that there are taigs, something one doesn’t want to presuppose.
Stanley’s view has it that much more of language than we might like to admit has these slurring effects, and so these ideas are crucial to understanding political speech.
The point is, it’s complicated. Slurs are a kind of hot topic in contemporary philosophy of language and there are a bunch of sophisticated theories about them. They exhibit interesting behaviours that are hard to model.
Is propaganda like this? Is it sneaky by exploiting tricky aspects of language like presuppositions and ‘not-at-issue content’? An alternative — and my tentatively preferred view — says no. On my view, propaganda is about quantity not quality. Any political message can become propaganda by simply being repeated enough; we don’t need fancy theories. We just need time and attention (which is to say, money).
I’ve very tentatively defended this view here before. In what follows, I want to explore the topic by considering our leading question: what impact does propaganda have?
Last week, I came across something rather interesting. As part of my ongoing project of analysing online Russian propaganda, I used a basic deep-learning method to find associations between words in the output of the leading Russian propagandist Vladimir Solovyov. In particular, the algorithm suggested that associated with ‘Zelensky’ in Solovyov’s Telegram output (and I should note that ‘output’ includes reposts) was the Russian word клоун, which, per Google, means both ‘clown’ and ‘funnyman’: Solovyov’s patterns of use suggested a tight link between Zelensky and ‘clown’.
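For the curious, here is a toy illustration of the kind of ‘association’ at issue. This is not the deep-learning method I actually used; it is a much simpler count-based stand-in, in which each word is represented by the counts of words that co-occur near it, and two words are ‘associated’ when their co-occurrence vectors point in similar directions (cosine similarity):

```python
# Toy, count-based stand-in for embedding-style word association.
# (Illustrative only: not the method used on the real corpus.)
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(tokenised_messages, window=2):
    """Map each word to a Counter of words seen within `window` tokens of it."""
    vectors = defaultdict(Counter)
    for tokens in tokenised_messages:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def association(vectors, w1, w2):
    """How interchangeably two words are used, 0 (never) to 1 (always)."""
    return cosine(vectors[w1], vectors[w2])
```

On a real corpus, words used in similar contexts, as ‘Zelensky’ and ‘clown’ apparently are in Solovyov’s output, end up with high association scores.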
This, one might think, partially supports the qualitative model. The ‘association’ above is meant to be something like semantic similarity; ‘clown’ is clearly pejorative, and so if ‘Zelensky’ and ‘clown’ are somewhat interchangeable in the output of a central propagandist like Solovyov, that suggests pejoratives are important for propaganda.
But recall our question: what is the impact on the audience of propagandistic discourse? Last week I was really speaking about the speaker-side of the equation, but propaganda is first and foremost something that should have effects, and so we need to try and model those effects. My leading question this week was whether the data and methods available to me could offer insight into this.
One of the most exciting possibilities the internet allows for is making broadcasting bidirectional: we can learn much more about the audience than we previously could. It turns out that using the same tools — the Pyrogram library for the messaging app Telegram — we can see how an audience reacts to particular messages.
Well, ish. Unfortunately, doing so in a sophisticated way is either beyond me or impossible, but we can do so in an unsophisticated way, and test the efficacy of political propaganda, thus potentially helping to support or refute one or another theory.
Here’s what I did. I took the last month of Solovyov’s messages, amounting to 6000. I found the ones in which ‘clown’ appears (25 messages) and made a list. I found the ones in which what philosophers would call the ‘neutral counterpart’ of ‘clown’, namely ‘Zelensky’, appears (185 messages), and listed them. Then I compared their average total reactions to the average total reactions for all messages across the whole month. Here’s what I found. First, compare ‘clown’ and the whole month:
Average number of reactions for messages containing клоун (i.e. ‘clown’/‘funnyman’ in Russian): 7383.71
Average number of reactions for all messages: 6445.35
Interesting! ‘Clown’ messages do seem, at least for this small sample, to outperform the average. Here’s the big and fascinating question: is this outperformance a way to model the causal contribution of propaganda?
Even if it is, problems occur when we note the average number of reactions for messages containing ‘Zelensky’:
Average number of reactions for messages containing (the Russian for) ‘Zelensky’: 7925.08
It’s more than for ‘clown’! If these data are reliable, and reactions are a window into causality, then if anything propagandistic terms lessen impact. At the very least, they don’t seem to increase it.
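For transparency, the comparison itself is nothing fancy. Assuming the messages come as (text, total-reactions) pairs, it is just a filtered mean, something like:

```python
# Average total reactions, optionally restricted to messages whose text
# contains a keyword; messages are (text, reaction_total) pairs.

def average_reactions(messages, keyword=None):
    """Mean reaction total over messages, filtered by substring if given."""
    if keyword is not None:
        messages = [(t, r) for t, r in messages if keyword in t]
    if not messages:
        return 0.0
    return sum(r for _, r in messages) / len(messages)
```

The three figures above are then `average_reactions(msgs, "клоун")`, `average_reactions(msgs)`, and `average_reactions(msgs, "Зеленск")` over the month’s messages.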
Let me just reiterate that I’m under no illusions concerning the force of the above. The ‘Zelensky’ and ‘clown’ sets are noticeably different in size; there are all sorts of confounding factors that could affect the averages (perhaps the ‘clown’ messages happen, purely by chance, to have been posted disproportionately during peak audience hours). And probably much more besides.
But I still think this is exciting and suggestive. Our attention is drawn, seemingly both as people and as theorists, to the salient, exciting cases of political propaganda: the outrageous or the clever. But perhaps that’s wrong, and we should look more to the subtle, ubiquitous messages we encounter. The intriguing question for next time is whether one can track subtlety in data.