Don’t talk to people like yourself

Matthew McKeever
Aug 1, 2021

(I was lesswrong-lurking recently and came across this, which puts the point below concisely, and thought I’d reup and rename a non-concise version of the argument. 25/07/23.)

What’s the point of conversation? I don’t pose this in a misanthropic, #introvertproblems sort of way, but as a question that’s been of interest to philosophers and linguists of all sorts for a decent amount of time. Here, I want to explore one famous framework for thinking about it, present a problem for that framework, and suggest a solution. The hope in so doing is that — in addition to being of intrinsic interest — if we can understand how conversation works, then we can understand why and when conversation fails to work, a goal that should seem pretty obviously desirable in an era of propaganda, dogwhistles, code-words, and so on. More particularly, the goal here will be to use these somewhat recondite ideas in philosophy of language to present an argument for the epistemic value of engaging with diverse voices.

Conversation, per a famous paper by MIT philosopher Robert Stalnaker, consists of at least two central moving parts: a common ground, and assertions which update it (the original paper is here; there are mountains of secondary literature — particularly interesting, and in very roughly the same vein as this post, are here and here). The common ground is a set of beliefs that are common knowledge between the participants in the conversation. Getting clear about common knowledge is tricky (what I just said is strictly wrong, because one can have known falsehoods in the common ground, say if you’re just supposing something), and I won’t try to do so here.

But examples help: in this paragraph you’re reading, I’m assuming there’s a certain common ground between you and me. For example, I’m assuming that you know that there are distinct disciplines of philosophy and linguistics, that you know what propaganda et al. are and why they’re important, and that you are au fait with internet talk (such as hashtags). But there are also some things I’m clearly not assuming as common. If I were presenting this idea to a philosopher friend, I wouldn’t mention Stalnaker’s affiliation, or the fact that the paper is important, or the fact that common knowledge is hard to think about. I would simply assume my hearer knows all that. That would have the advantage of brevity: almost literally everything in this post up to now could be streamlined in such a case, as I could say ‘Hey David! I have an idea about Stalnakerian conversation — wonder what you think’, taking advantage of our rich common ground. On the other hand, all the above would be of zero epistemic benefit to David — he knows it all already. By contrast — I hope! — it is of some epistemic benefit to you. This tension between the two roles that common ground plays — as a source of brevity and smooth conversation when rich but an opportunity for knowledge when poor — is the central tension I’m concerned with.

So conversation occurs against a background common ground of mutual knowledge, which can be richer or poorer. And to say that the knowledge is common is to say at least that both you and I know it. Then we get a very neat account of what we do in conversation: we update the common ground by asserting things (assertion is the second Stalnakerian tool). Again abstracting from details, we attempt to add to the beliefs that form our common ground. To change examples: if we’re talking about the pandemic in our country, I might add ‘In region R they’re almost out of beds’, having just read it. You’ll accept it (maybe first you’ll ask how I know, or challenge it if you’ve read otherwise), and it’ll come to be part of our common ground. So far, I hope, so straightforward.
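Since the machinery here is basically set-theoretic, a toy implementation may help make it concrete. Below is a minimal Python sketch (propositions as plain strings, acceptance as a boolean flag; all of this is my illustrative simplification, not Stalnaker’s own formalism):

```python
# Toy model of Stalnakerian conversation: a common ground (the set of
# propositions both parties accept) updated by assertions.

class Conversation:
    def __init__(self, beliefs_a: set[str], beliefs_b: set[str]):
        # The common ground starts as what both participants believe.
        self.common_ground = beliefs_a & beliefs_b

    def assert_(self, proposition: str, accepted: bool = True) -> None:
        # An assertion, if accepted rather than challenged, enters the
        # common ground. ("assert_" because "assert" is a Python keyword.)
        if accepted:
            self.common_ground.add(proposition)

me = {"there's a pandemic", "region R is almost out of beds"}
you = {"there's a pandemic"}
conv = Conversation(me, you)
conv.assert_("region R is almost out of beds")
print(conv.common_ground)  # now includes the asserted proposition
```

The accepted flag compresses the acceptance-or-challenge step (‘maybe first you’ll ask how I know’) into a single boolean, which is crude but enough for the point.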

We need one more set of ideas to get my story on the table. Paul Grice famously introduced conversational maxims into his theory of linguistic communication (e.g. here). These are sort of (breakable) rules we tend to abide by in conversation. We try (i) to say true things (I tell you that Stalnaker, rather than, say, Chomsky, came up with the idea of conversation-qua-common-ground update); (ii) to say relevant things (I don’t, in so telling, tell you about the time I had lunch with him in St Andrews (though I did and it was cool)); (iii) to convey appropriately-sized chunks of information (I withheld a lot of detail not necessary for us); and finally (iv) to speak in an appropriate manner (I tried to explain things clearly, without taking up thousands of words, using big words, swearing, etc.).

Call these last two things — that we try to speak briefly and clearly — conversational quality (this is a bad choice of terminology on my part, but I feel stuck with it now). We try to have quality conversations in this sense: to speak in a nice clear manner, giving just the information we need. Especially relevant for me is the idea of brevity: in a well-functioning conversation among equals, there should be a back-and-forth (of course, not every conversation is between equals, an important limitation of the idea developed here), and it shouldn’t be the case that one person just talks at the others. Now here is a claim that should seem plausible:

Conversational quality increases as the size of the common ground increases

Hopefully, this seems right. Again, think of my example. I could explain to my friend David the idea of this post, quite literally, in two sentences. Explaining it to someone who doesn’t have a handful of degrees in philosophy takes a little longer. Whatever you do, this is probably true for you. There’s some domain where conversations flow much more smoothly because you’re on a par with your conversational partner, and can rely on a rich common ground.
(A vivid example: people like to talk about someone coming in from the wild, or waking up from a coma, into the pandemic world. But really try to imagine quite how difficult such conversations would be. Super hard! It would be so in part because a gigantic bit of common ground is missing: getting your common ground synched with the newly de-comaed person’s would take a while, and you’d be met with a lot of disbelief.)

The second idea we need is what I’ll call the expected lesson of a conversation. This is just some terminology I made up as a label, and it roughly tracks the first two norms of conversation Grice talked about: that we should say true and relevant things. We learn things in conversations when our interlocutors abide by these norms — we gain in knowledge. Now here is our second principle:

The expected lesson of a conversation decreases as the common ground increases.

Why is this so? An example will help. Consider the following limiting case:

[Figure: two speakers, l and r, and their common ground, which completely overlaps l’s belief set; representation from l’s perspective.]

Remember that the common ground is got by taking your belief set and your conversational partner’s, and adding something to the common ground only if it’s in both. Assuming we have some sort of grasp of the common ground (which we must, though how extensive that grasp is remains up for grabs), we can therefore compare your belief set with the beliefs that are common ground. Now imagine, per impossibile, you could actually do this: you compare them, and see they completely overlap. That’s represented above. That means that everything you believe, your conversation partner also believes. Accordingly, you know that your conversational partner has nothing to learn from you — they already know everything you do! In such a situation, say that the expected lesson for them is 0. That is to say, in cases in which the common ground is large (relative to one participant’s belief set — here l’s), the expected lesson of the conversation, for the other participant (r), will be small.
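Continuing the toy model, the expected lesson one party offers the other can be crudely measured as the number of their beliefs not already in the common ground (treating beliefs as discrete, countable items is again my simplification, not part of the official story):

```python
def expected_lesson(speaker_beliefs: set[str], common_ground: set[str]) -> int:
    # What the speaker could teach: their beliefs not already shared.
    return len(speaker_beliefs - common_ground)

l_beliefs = {"a", "b", "c", "d"}
common = {"a", "b", "c", "d"}       # complete overlap, as in the figure
print(expected_lesson(l_beliefs, common))  # 0: l has nothing to teach r
```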

Now imagine you’re participant l. In considering whether to enter into such a conversation, you realize you’ll be able to be a good quality interlocutor, but you have nothing to say: you have literally no lesson to give! But the first maxims tell one to say truthful, relevant things. Here, you have none! So you can’t abide by the lesson, precisely because you’re maximally well-placed to abide by quality. The greater the proportion of your beliefs that are also common beliefs, the smoother conversation will be for you, but the less you’ll be able to add, to tell the other person.

Consider another case: one in which your belief set and the common ground barely overlap, so that you believe things — e, f, and g, say — that aren’t common ground.

This is basically the opposite. You know that you (potentially) have things to say (e, f, g). But by the same token, you know conversation will be harder. Imagine that g is something really juicy and important, but can’t be explained without e and f. Then you’ll have to explain first e and then f. And that might take a while (returning to one of our analogies, what I’m saying right now is my ‘g’, and it’s taken me an afternoon, 1,500 words, and two pictures to say it).

In intermediate cases, the same trade-off applies. If you’re considering which conversations to enter, you can optimize for quality, in which case you’ll decrease the expected lesson, and thus the epistemic worth, of the conversation. Or you can optimize for knowledge, in which case you’ll decrease quality, and might find your conversation lapsing into a monologue.

(Another possibility is that people are non-cooperative: they try to exploit conversation to gain knowledge, and don’t care whether they also add knowledge. They’re leeches. But a community of leeches won’t work, for market-for-lemons-esque reasons: at t0, the community will contain high-knowledge and low-knowledge people, assumed to have roughly the same common grounds. The low-knowledge people will accost those whom they assess as being high-knowledge. The high-knowledge people will not benefit from those conversations, so will leave the community, and so the community will eventually just consist of low-knowledge people and the expected lesson of every conversation will be low.)
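A toy simulation of that unraveling may help (the scores, the exit rule, and the idea of summarizing a person with a single ‘knowledge’ number are all invented for illustration):

```python
# Toy lemons-style unraveling: members well above the community's average
# knowledge gain little from conversations there, so they exit, dragging
# the average down until only low-knowledge members remain.

community = [10, 9, 8, 3, 2, 1]  # each member's "knowledge" score

for round_ in range(5):
    avg = sum(community) / len(community)
    print(f"round {round_}: avg {avg:.1f}, members {community}")
    # Anyone well above average expects little lesson from a random chat
    # here, so (on this toy exit rule) they leave the community.
    community = [k for k in community if k <= avg * 1.2]
```

Run it and the community collapses to its lowest-knowledge member within a few rounds, which is the lemons dynamic in miniature.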

So what?

You might reasonably think: so what? What this shows, if it’s right, is that there’s a trade-off to be made. You want to do as well as you can on both the quality and the lesson metrics: you can’t max out both, but you can have some of each. It’s no different from the fact that you can’t eat all the chips and all the ice cream: you have to settle for some of each. In this case, you’ll need to find the point that best balances quality and lesson. Again, maybe a picture will help:

[Figure: quality increases as the expected lesson decreases — you want to find how to get as much of both as you can.]
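In code, the balancing act might look like the following sketch. The particular curves (quality rising linearly with common-ground overlap, lesson falling linearly) are stand-ins I’ve made up; the point is only that some middling overlap maximizes the combination:

```python
# Pick the common-ground overlap that best balances quality and lesson.
# Overlap runs from 0.0 (total strangers) to 1.0 (complete overlap).

def quality(overlap: float) -> float:
    return overlap            # smoother talk with more shared ground

def lesson(overlap: float) -> float:
    return 1.0 - overlap      # more to learn with less shared ground

candidates = [i / 100 for i in range(101)]
best = max(candidates, key=lambda o: quality(o) * lesson(o))
print(best)  # 0.5: on these toy curves, aim for a middling overlap
```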

You might even think this is obvious or trivial. Of course you need someone close enough in worldview to you to be easily understood, but different enough to have something to add. Well, okay: but if it’s obvious, it’s good to be reminded of obvious things (it wasn’t obvious to me as of yesterday afternoon).

But I nevertheless think teasing out these implications brings into focus a couple of interesting things. The first is that a conversational strategy aimed purely at maximising knowledge, which one might think of as desirable for truth-seeking rationalist-type people, is in fact a bad way to go. The way to learn most is to talk to someone with whom you have nothing in common, but those will be very hard conversations. What you should do is aim for a middle ground: someone distant enough to have things to add, but not so far away as to be very hard to understand.

The second thing is that I think a puzzle still lingers. Each of us spends most of our time with people with whom we have very rich common grounds. But if rich common grounds decrease the expected lesson, maybe that’s not what we should be doing. That, perhaps, is something that has relevance for our contemporary (allegedly) echo-bubble-filled moment.

(Initially, this post was going to be about the puzzle that arises when experts get together. On the view developed here, expert gatherings mean a rich common ground, thus good conversational quality, but a poor expected lesson. Then why do we do it? The reason I was going to push is that we should think of these sorts of conversations as a sort of arbitrage. To arbitrage, in finance, is to exploit differences between goods that are priced differently but should be priced the same — the same stock, say, on two different markets. The effect of arbitrage is to level differences. I was going to suggest that the goal of expert conversation is exactly this sort of difference-levelling. In particular, given that our knowledge of the common ground is very fallible, it can make rational sense to enter into conversation with people whom one takes to have an almost identical common ground, because you’ll be mistaken about exactly what they know that you don’t, and so the aim of conversation in such cases is to luck upon the things that aren’t common ground but should be, and thus to level each participant’s belief state. I still like this idea, but obviously considerably less than when I started, since I’ve relegated it to this parenthesis.)

To return to the main thread: if something like the above is indeed an anti-echo-chamber view, it’s also something like the converse: an epistemic argument for diversity. There’s a lot of talk these days about diversity — about expanding the range of voices we hear. A standard reason for doing so is moral: people have been deprived of a voice for too long. But the considerations here suggest a straightforwardly epistemic reason for diversity: by participating in conversations with less rich common grounds, you open up the possibility of receiving better epistemic lessons. It’s also a consequence of the idea here that these conversations might not go smoothly, but that’s the price you pay to listen to new voices.
