The Uncontrollability of Conceptual Engineering: A New Argument
Conceptual engineering is an approach to philosophy founded, according to one formulation, on the following ideas: any language we care to imagine will be defective; this defectiveness can be a source of epistemic, moral, and other sorts of harm; and accordingly we should try to improve language to remove its defects. One of its recent popularizers, Herman Cappelen (whose master argument I have attempted to distill in my first sentence), nevertheless suggests reasons for pessimism about the prospects of conceptual engineering.
His pessimism is founded on the observation that changing language is hard, uncontrollable, and unpredictable. For any language, there are millions of speakers, but we can’t control their usage to make it align with our preferences. And while language does change, and change — sometimes — for the better, it does not seem to do so in accordance with easily intelligible rules that would give us a blueprint for introducing new improved languages. Why is it on the margins of acceptability to say [quasi-slur deleted] and not acceptable to say [similar quasi-slur deleted]?
In his elaboration of this pessimism, Cappelen brings to bear the tools of contemporary philosophy of language: he gives us a justification for thinking that, according to our best theories of linguistic meaning, language change will be out of control. In response, others have questioned his use of those tools, suggesting ways out for the engineer. My aim here is to present another route to Cappelen’s core claim that language change is very unpredictable, and that accordingly would-be conceptual engineers should be wary of trying to improve language, because they won’t be able to foresee what their efforts will lead to.
Even if you don’t care about internecine debates in contemporary philosophy, though, you should still care about the topic of this post. You should agree with the master argument sketched above, and should want to change language (I try to convince you of this here; in accordance with laziness, I have no other references in this piece. If you want them, ask me, or ask Google). And so you should be interested to learn about what the process of changing language might be like, and the problems it might involve.
Before going on to those problems, though, a word on methodology. Some contemporary analytic philosophy has taken a turn towards matters of social and political importance. For example, my field is philosophy of language, and increasingly people are interested not in technical details about how language or communication functions, but in the ways languages can be used for malign purposes. People study slurs, propaganda, bullshit, dogwhistles, and so on, and there is a recognition, missing from the previous couple of generations of analytic philosophy of language, that language is an arena in which social power is wielded, and that philosophers can usefully theorize about this using tools partly handed down to us from those previous generations.
Along with this — welcome, in my view — turn towards the real world, there has also been a widening of methodology, of the toolbox. Analytic philosophers formerly ignored disciplines like critical theory, but now, it is not uncommon to come across mentions of, say, Foucault, in the above work. That’s of course understandable: if you care about power, you should read Foucault.
But the methodology hasn’t widened enough, in my view. If we care about the social and political effects of language, then while it’s all very well to read Foucault, we should also check whether, you know, the social sciences have anything to tell us about the things we are interested in.
This post aims to explore the viability of this wider methodology with a case study. In particular, I want to consider whether economics has anything to tell us about language use, and in particular about language choice in the context of conceptual engineering. To do that, I need to discuss, to the best of my very meager ability, the relevant economics.
Consider the following model of how we know how much of a given thing to make (I follow — basically copy from — this text throughout; it explains things much better than I do). A number of us value some good, say cheese pizza. Let’s assume we can assign a monetary value to how much we value it: a value between 0 and 1 dollars (that is, between 0 and 100 cents), where to assign the value 1 is to value something as much as it’s possible to value something. Assume we rank people in decreasing order of how much they value the good, with a function f(x) from people’s rank to the value they assign it (for mathematical niceness, which you can ignore, assume the domain of the function is the interval [0,1], that everybody has a rank, that there’s a person for every rank, and that no two people share a rank). So the person at rank 0 will value it at 1 (this will be someone who loves bread and cheese), and the person at rank 1 will value it at 0 (a celiac allergic to nightshades). That is to say, f(x) = 1 − x. Finally, assume there’s some fixed cost to making the pizza that gets reflected in its price p, denominated on the same scale. It’s intuitive then that all and only those x will buy it for whom f(x) ≥ p, which we can represent as so, where the blue line plots f(x) and the red line is p, and it looks so mediocre because of incompetence on my part (a much nicer version is on p. 513 of the work I’m slavishly copying):
So far, a lot of words for not much. You’ll buy something if it’s cheap enough; so will others, and that will determine how much a producer should produce of it.
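For the computationally minded, here is a minimal sketch of this non-networked model in Python. The function names are mine, not anything from the text being followed; the only assumptions are the ones already made above, namely f(x) = 1 − x and a price p on the same 0-to-1 scale.

```python
# Non-networked ("pizza") demand: valuations run linearly from 1 down to 0.

def valuation(x: float) -> float:
    """Intrinsic value the person at rank x (in [0, 1]) assigns to the good."""
    return 1 - x

def quantity_demanded(p: float) -> float:
    """Fraction of the population that buys at price p.

    Since f(x) = 1 - x is decreasing, the buyers are exactly those with
    x <= 1 - p, an interval of length 1 - p.
    """
    return max(0.0, 1 - p)

print(quantity_demanded(0.3))  # 70% of people value the pizza at 0.3 or more
```

Nothing surprising happens here: demand falls smoothly and linearly as the price rises, which is exactly why the networked case below is the interesting one.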
Things get much more interesting, and relevant for our purposes, when we move to other sorts of goods. Consider a fax machine, for example, to use the classic example from the literature. A fax machine is notably different from a pizza, because the value to you of the fax machine depends on how many others are using it. If lots of people use faxes, you value it a lot. At the extreme, if no one else uses it, its value to you might literally be zero. This is different to pizza: others consuming pizza doesn’t affect your enjoyment of it.
We can model this by changing our equation somewhat. To take into consideration, in the simplest possible way, that the value of a good increases as more people use it, we can assume that each person’s valuation is scaled by the fraction x of people using the good, so that the value of a good subject to network effects is f(x)·x, which is to say (1 − x)x. From now on, we’ll use f to name this network goods function, not the non-networked one from earlier.
But now when you graph this, it looks more interesting:
Remember that for the pizza, we could find where the horizontal price line intersects the demand curve, and because of how we set things up (with higher values of x linearly corresponding to lower valuations of the good), that gave us the equilibrium quantity of the good that should be produced.
It’s not so simple here. Assume that, like any good, our fax machine costs money to make and so has a selling price p. There are various possible values for it. Here’s one:
The good is too expensive: no one buys it. But it might be cheaper. We can imagine:
Now this is interesting! The line intersects the curve at two points. But the price line intersecting the curve is supposed to give us the equilibrium quantity. Does that mean there are two equilibrium quantities for network goods?
Yes. And this will be very significant when we come to considering conceptual engineering, as we’ll treat an ameliorated language as a network good whose price is the effort it takes to learn.
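The two intersection points can be computed directly. A sketch, under the assumptions already in play (the curve is (1 − x)x, price p on the same 0-to-1 scale; the function name is mine): setting (1 − x)x = p and rearranging gives x² − x + p = 0, a quadratic with two roots whenever p is below the peak of the parabola at 1/4.

```python
import math

def equilibria(p: float):
    """Return the (low, high) equilibrium fractions for price p, or None.

    Solves (1 - x) * x = p, i.e. x**2 - x + p = 0.
    """
    disc = 1 - 4 * p
    if disc < 0:
        return None  # price too high: the line never meets the curve
    root = math.sqrt(disc)
    return (1 - root) / 2, (1 + root) / 2

low, high = equilibria(0.2)
print(round(low, 3), round(high, 3))  # roughly 0.276 and 0.724
print(equilibria(0.3))                # None: too expensive, as in the first diagram
```

Note that at p = 1/4 exactly, the two equilibria coincide at the peak of the parabola, where the line is tangent to the curve.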
But return to faxes, and we can see this makes sense. Imagine two people, Faxy and Foney. Faxy loves the idea of a fax machine, because they’re a hipster who likes the idea of seeing their correspondents’ handwriting but doesn’t want to go to the post office. Foney doesn’t — Foney is all about hearing the voice of their friends.
If fax machines didn’t have network effects, then Foney would be much less likely to buy the good than Faxy (though they might do were it super cheap). But it’s a different story because faxes do have network effects. In particular, imagine telling Faxy this: okay, so the fax machine isn’t super popular; only 15% of your friends will use it. And imagine it costs $d. They might agree to buy it, if they’re really super into seeing their friends’ handwriting, even if it’s just a small proportion of them, such that they value that more than d. And imagine telling Foney this: look, we know you don’t care for faxes. But 85% of your friends are using them. It’s pretty much the best way to keep in contact. It’s $d. Again, we can imagine Foney agreeing, because, even though it’s not their preferred method, it’s the one their community has chosen, so they also value it more than d. So there are two different outcomes: a small collection of keen early adopters who really value the good, and a much larger collection of people, some of whom are in it largely just because others are.
So then it seems that a price of $d determines not one but two different equilibrium quantities: the low one, where just the fax lovers buy in, and the high one, where others buy in because of the ubiquity of faxes (in fact, there’s at least one more, namely when x is 0: if no one uses a fax machine, it is worthless, and nobody will want one).
And mathematically, of course, this makes sense: since the value is a product of the intrinsic liking of the good and how many others use it, we can reach the same value with high intrinsic likers and few users, or vice versa.
Still, a question arises: why should we think some one particular price, the one we sketched in red in the above diagram, determines an equilibrium? Well, it doesn’t necessarily: we saw it could be set too high to intersect the curve at all.
But if the price isn’t too high, then as we go along the list we’ll find someone for whom the value of the good (that is, their intrinsic liking for it, boosted by the value added if everybody up to them in the list also uses it) is equal to the price. At that point the price line intersects the parabola, giving the low equilibrium, and following the horizontal line across to the other side of the curve gives the high one. If there’s a low equilibrium, there will be a high one.
This is interesting in itself, but these equilibria have even more interesting properties. Imagine we’ve got a pair of equilibria at fractions x’ and x’’. Then every y between x’ and x’’ should want to buy the good, because for those ys, f(y) > p (it’s easiest just to look at the diagram to see this). This will push demand upwards. So everybody between the low and the high equilibrium will buy it: if it so happens that a fraction of the population greater than the low equilibrium x’ but lower than the high one x’’ buys the good, we will end up at the high equilibrium. Any point on the curve above the price line, dropped down to the quantity axis, determines a non-equilibrium quantity such that, if exactly that many people buy it, there will be pressure towards the high equilibrium. If I haven’t explained this very well, staring at the graph, and thinking about what exactly it means, might help.
But there’s more. Imagine that our society’s purchasing overshoots the high equilibrium, so that a fraction x’’+n buy it. Then inspection of the graph reveals that f(x’’+n) < p, so some will sell, moving the quantity of the population owning the good back to x’’. Similarly, imagine the ys in the interval [0,x’], i.e. between 0 and the lower equilibrium. For all those people, f(y) < p (this is again grokable by looking at the graph), so anyone who has bought it will regret doing so, pushing demand down towards the 0 equilibrium.
And now we reach the main point. Consider our low equilibrium x’. It is very touchy. If slightly fewer than x’ buy it, we’ll move towards the 0 equilibrium. If slightly more than x’ buy it, we’ll move towards the high equilibrium, x’’. x’ serves as a sort of tipping point. The behaviour of the good, then, is very unpredictable if roughly x’ of the population buy it: knowing only that x’±n bought it, we have no idea whether the product will tend towards ubiquity or towards non-use. The quantity demanded is volatile and sensitive around x’, and the good’s success depends on clearing x’.
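This tipping behaviour can be simulated directly. A sketch, with an update rule that is my own simplification of the reasoning above (not anything from the text being followed): if a fraction z currently uses the good, person x values it at (1 − x)·z, so the buyers are exactly those with x ≤ 1 − p/z, and usage moves toward max(0, 1 − p/z) each round.

```python
def step(z: float, p: float) -> float:
    """One round of buying/selling given current usage fraction z."""
    if z <= 0:
        return 0.0  # a worthless network good stays unused
    return max(0.0, 1 - p / z)

def run(z0: float, p: float, rounds: int = 200) -> float:
    """Iterate the adoption dynamics from starting fraction z0."""
    z = z0
    for _ in range(rounds):
        z = step(z, p)
    return z

# With p = 0.2, the low (tipping) equilibrium is about 0.276.
print(round(run(0.30, 0.2), 3))  # starting just above it: heads to ~0.724
print(round(run(0.25, 0.2), 3))  # starting just below it: collapses to 0.0
```

Two starting points only a few percentage points apart end up at opposite extremes, which is the formal shadow of the unpredictability claim: near the tipping point, small differences in uptake determine whether the good becomes ubiquitous or dies.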
Back To Conceptual Engineering
I hope that that is at least slightly intuitively clear. If it’s not, just think again about network goods with which you are familiar. Your decision to move to a new social media site, or videoconferencing platform, or whatever, will in part be determined by what you take its intrinsic value to be, but also by what you anticipate its value to others will be. And we know, from the success and failure of social media sites (and related things) that it can often seem chaotic and unpredictable. The economics of network goods helps get a handle on this feature of our lives.
But I think it can also help us get a handle on conceptual engineering. In particular, I think it provides us with a quasi-formulation of Cappelen’s thought that conceptual engineering is unpredictable and uncontrollable. A language is an archetypal network good: the more people use it, the more valuable it is to each of its users. We can imagine a group of ameliorators getting together, assessing alternative languages. They value them differently inherently: maybe a feminist values a language that expresses feminist values more clearly, an AI person a language according to which artificial intelligence counts as intelligence simpliciter, and so on.
And we can model the ‘price’ for the community learning to speak the new language as the effort to learn it, to teach people about it, produce materials in it, and so on.
What should an ameliorator’s attitude be towards this process of assessing languages? If the foregoing is right, they should be wary. In particular, they should be very concerned about whether their community will latch onto the language they think is best. The feminist philosopher should look at the AI person’s language and worry: ugh, that’s a terrible language. But those guys over there are super keen. And I know that if there are enough of them to reach the first equilibrium, then it’ll even be in my best interests to speak the language; after all, I want to talk to my community. And the AI person will think exactly the same.
What they’ll thereby be giving expression to is the thought that whether or not a language is successfully introduced is hostage to some strange facts about how network goods become prevalent, facts that seem to float free of the inherent value of any given good. In light of these strange facts, any would-be ameliorator should worry that their ameliorative project is at the mercy of the strange luck of network goods, and take seriously the lack of control they have over whether they can change language use.