Debunking arguments, conversational norms, and Twitter

Matthew McKeever
8 min read · Apr 24, 2021

A range of interesting arguments in philosophy are debunking arguments. Such arguments attempt to make us question our faith in what we believe or take ourselves to know by pointing out that our beliefs are the outcomes of processes that might not be truth-tracking. A paradigm such argument might, for example, point to a cherished moral belief of ours, which we take to be foundational, and suggest that its compellingness for us can only be the result of the fact that thinking in that way helped our ancestors navigate their environment. That we believe something is evidence that the belief-forming mechanism responsible for it helps us evolutionarily, not that it tracks truth.

A related argument is genealogical. On one way of understanding it, we need to take into account the fact that we are somewhere, somewhen, and subject to a host of cultural and other influences when assessing why we believe what we believe. I believe that gay marriage, for example, is morally fine, and would like to think that it’s because my culture and I have hit upon the truth of that fact. But can I honestly say, had I been born 50 years ago, or in a (ubiquitously, not just mostly, as is the actual case) homophobic country, I would still have believed it? The genealogical debunker suggests no, and that this should inspire what Amia Srinivasan calls ‘genealogical anxiety’ (see this for the phrase, the argument, and references for all I’ve just said).

Yet a third species of argument also serves to undermine our faith in our representational abilities, but it does so by attending to the limits of and constraints on our language. A very simple such argument is known, in fancy-speak, as the Sapir-Whorf hypothesis, but more mundanely rests on the observation that our languages in part determine what we can think, and our languages are in part determined by our environment. Hence the fact that, proverbially, Eskimos have 17 words for snow, and thus presumably the capacity to differentiate snow much more finely than us (I think this is false, and linguists don’t care for Sapir-Whorf these days, but it’s a neat story and captures the very rough outline of a range of philosophical thinking). We are representationally lacking with regard to snow, and once we realize that, where does it end? Can we have faith that, for any cherished language-expressed belief, there isn’t someone standing above us as the Eskimo does when we express our crappy snow beliefs, seeing and speaking much more clearly about everything we care about? These arguments, then, in one way or another, undermine our faith in our representational and linguistic capacities. They give us reason to doubt the deliverances of reason.

There is, sadly, no copyright-free image of the philosopher Paul Grice, discussed below, online. Here’s a reconstruction of an extinct swine called ‘Grice’. Image credit: Northerner, licence

It seems like a good time for such arguments: some debunking would be good. We’re in an epistemically fraught time, with social media a hotbed of extreme beliefs and conspiracy theories. A natural question to ask is whether these sorts of arguments can shed light on our condition. If they could, it would be reassuring. We would like to debunk conspiracy theories or the terrible state of online ‘Discourse’. Our world would seem, to me at least, much less strange if I could understand, say, pizzagate: if I could make sense of it, I would feel much more at home.

In that spirit, I want to present here, drawing on some foundational ideas in contemporary thinking about language as well as on the nature of social media, a debunking argument that might serve to explain how extreme views thrive on social media.

Let me foreshadow the argument. What is it to tell somebody something? Well, we can note that it’s a bi- or multi-polar phenomenon. Minimally, there’s the speaker and the hearer(s). The speaker subjects themselves to certain norms and standards with the aim of impressing on their audience certain things. My basic claim is that this audience-centric norm we impose upon ourselves, when combined with the random audience-creation mechanism of Twitter (namely that one follows people having seen their tweets retweeted), can explain why extreme views flourish.

Before doing that, let me say a bit more about what I mean by ‘extremity’ and cognates. I make the charitable assumption that most people are humans like us. They believe many things; they attempt to justify much of what they believe; they are probably, despite their best intentions, inconsistent in ways big or small. Maybe they like Ayn Rand but they also like stimulus checks, but if challenged, they would try to defend the unlikely pairing. We contain multitudes, are epistemically flawed, but also aren’t all complete bullshitters (in the technical sense). I hope this is a recognisable picture, something we could at least hope of our fellow person; and let’s call such people rational.

And call extremity the denial of all this. The extremist is more or less single-minded in defending one idea or set of ideas; they eschew inconsistency; they have a lax attitude towards truth.

Assume (this is the fact that inspires this whole post, not something I’ll present evidence for) that extremity wins out on social media. Extremists succeed there: their ideas get shared more successfully than those of people like us.

This should be a puzzle. After all, I’m assuming that at the individual level we are rational and at the group level irrational, and yet the group just is, in a sense, a group of individuals. How do we get from individual rationality to group irrationality?

And here’s my answer. Social media operates according to certain norms of conversation, analogous to the norms that hold in ordinary face-to-face conversation. And it’s these norms that lead to the group irrationality. Just as evolution and culture shape what we (can) believe, and language (according to some, at least) shapes how we view the world, so, I’ll claim, the norms of social media conversation engender extremity. In order to make that point, though, I need to briefly discuss some thinking about norms.

Relevance

Good communicators communicate things relevant to the people they’re talking to. This is a fact common sense recognizes and theory explains. But the retweet mechanism puts whom one talks to somewhat out of one’s control, and so, to abide by the norms of communication within a system with an audience-determining medium like retweets (or something similar), it can so happen that you are obliged to tweet nonsense, and indeed rewarded for it.

The philosopher Paul Grice is famous for his conversational maxims: these are sets of rules or norms that govern the course of good conversation (see here, and those familiar will realize I’m slurring over details or taking presentational liberties). You should express yourself clearly, should say true things, shouldn’t use too many words, and so on. The one I’m particularly interested in is relevance: the idea that your conversational contribution should be fitting for the context at hand. If I am talking about Putin to my Russian friend, I shouldn’t start talking about cod; and vice versa.

What’s particularly interesting is that relevance is plausibly an audience-dependent concept. What is relevant depends on who you are speaking to. Let’s express that as follows:

Social Media Relevance Principle. A tweet is relevant provided it speaks to the interests of your audience.

With this one highly intuitive principle we can, I think, show how the rational person turns into an extremist. Had I more time, it would have been cool to show this a bit more formally using simulations; as it is, this sketch will have to do.

Assume a tweeter starts tweeting and somehow develops a minimal audience. Assume, moreover, that this initial stage involves audience heterogeneity: the audience consists of people with different views, perhaps inconsistent, with some regard for truth: the multitudinous non-bullshitters from above. Assume, finally, that the person believes each of p, q, and r equally strongly. They tweet, more or less at random, one of their opinions: say p. Nothing: nobody pays attention. They tweet q. Again, their small audience doesn’t pay attention. Getting ready to shut down the app having barely started (the fate of many), they give it one last shot: r.

r strikes a chord. It gets retweeted. Because a retweet shows the tweet to the retweeter’s followers, it reaches a bigger audience. It gets retweeted some more, and some people, on the basis of the tweet of r, come to follow the person.

But now notice: the speaker’s audience is considerably more r-interested than p- or q-interested. It’s more or less a fluke that this is so, but nevertheless it is so. And once we get past this initialization phase, both the norms of communication and the path to extremism kick in.

Because think about it: how should they go on tweeting? If they want to comply with relevance, then, knowing a majority of their audience is r-interested, they need to tweet something at least consistent with r. It’s like talking to your friend who’s into Russian politics about Russian politics rather than cod. So you do. Then, because your audience is by hypothesis r-friendly, they are likely to be friendly to things consistent with r. So more retweets, and more of an r-audience, and it becomes yet more relevant to tweet r-consistent things. And so, from believing p, q, and r equally, and being equally likely to tweet about them (assuming p and q aren’t r-friendly, if not outright r-inconsistent), you come, in order to accommodate the dynamic, retweet-shaped audience of social media platforms, to home in on what has previously proved popular, becoming more or less single-minded. Being a cooperative communicator can lead to extremity.
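For what it’s worth, here is a minimal toy model of that dynamic. It is only a sketch, not a serious simulation: every parameter (a starting audience of two followers per opinion, a 30% retweet chance, one new like-minded follower per retweet) is an assumption chosen just to make the feedback loop visible.

```python
import random

def tweet_topic(audience, rng):
    """Pick a topic weighted by audience interest: the Social Media
    Relevance Principle in action. Initially all weights are equal,
    so the tweeter tweets more or less at random."""
    topics = list(audience)
    weights = [audience[t] for t in topics]
    return rng.choices(topics, weights=weights)[0]

def simulate(steps=50, retweet_prob=0.3, seed=None):
    rng = random.Random(seed)
    # Small, heterogeneous starting audience: equal interest in p, q, r.
    audience = {"p": 2, "q": 2, "r": 2}
    for _ in range(steps):
        topic = tweet_topic(audience, rng)
        # Each follower interested in the topic may retweet; each
        # retweet exposes the tweet to like-minded people, one of
        # whom follows. (Both numbers are illustrative assumptions.)
        retweets = sum(rng.random() < retweet_prob
                       for _ in range(audience[topic]))
        audience[topic] += retweets
    return audience

if __name__ == "__main__":
    # Typical runs end with one opinion's following snowballing while
    # the others stagnate: whichever topic gets an early fluke of
    # retweets becomes ever more "relevant" to tweet about.
    print(simulate())
```

The weighted choice encodes the relevance principle, and the follower-recruitment step encodes retweeting; nothing else is needed for one opinion, chosen by early luck, to crowd out the others.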

Of course, often this won’t be a problem. If I tweet philosophy memes and get followers, then I’ll tweet some more memes, and we’ll all have a good time. But what if I tweet something marginal or dodgy, the opinion I have that is most cancel-worthy, you know, [opinion redacted]? Assume it’s some terrible conspiracy theory. The platform will be indifferent: the same logic of retweets and relevance could radicalise me in my marginal opinion, and from rationality an extremist might be born.

Of course, no one forces you to do this. But selection effects will favour good, that is, relevant, communicators. If I unfortunately express [opinion redacted] and get a bunch of deplorable [opinion redacted]-liking people as my followers, I’ll probably turn away from Twitter and hope my career is intact. But provided enough people aren’t so moved, we can see why communicatively rational people may tend towards extremism on social media. It’s a familiar fact that ‘the algorithm’ can lead to extremity by a sort of snowball effect; if what I say is right, the rationality of communication can do the same. This is scarier (the irrationality is coming from inside the house) but also heartening, because our audiences, and the platforms we inhabit, are more under our control than the YouTube algorithm.
