LLMs for philosophers (and philosophy for LLMs)

Matthew McKeever
22 min read · Dec 10, 2022

(Edit: as some people are reading this, I figured I’d share some of my academic work on the topic — this is about how to think of meaning when it comes to LLMs. The title is missing, it’s anonymous, and I put it through ChatGPT to help anonymous review but feel free to share, give feedback, etc.)

With the release of ChatGPT it feels like public awareness of the potential of large language models (LLMs) has reached a new high. And interacting with it naturally raises old and deep questions about what we’re actually seeing when we ask for a biblical verse explaining how to remove a sandwich from a VCR or get it to write code for us. Are we faced with an intelligent machine?

Many answer no. The errors LLMs can easily be led into, and their lack of connection to the world beyond having read the internet, to name two reasons, mean that ‘intelligence’ is not the right description.

We can assess that claim in many ways (and engaging with the critics, although a worthy task, is not the aim of this post). Writing as a philosopher of language, what I’m interested in, and will discuss here, is whether we can learn anything about how language works from attending to LLMs. My overall argument is the following: the recent development of LLM architecture (from the basics of word2vec as capturing a sort of meaning in a computationally tractable form, through the unreasonable effectiveness of recurrent neural networks as text generators, up to the attention and self-attention that underlie the powerful transformer architecture) has parallels in the discipline known as formal semantics. That latter discipline, currently sitting awkwardly at the intersection of philosophy and linguistics, has its own word2vec (Fregean denotational semantics); its own generative mechanism (the lambda calculus applied to interpreted syntactic trees); and even its own move towards recognizing the dependence of meaning on linguistic context (dynamic semantics and DRT, among other things).

I draw conclusions from the parallels between the two disciplines, LLM text generation and natural language formal semantics. If you buy the parallels claim I make, you should, I’ll argue, also buy:

(1) that one should increase one’s confidence that the methods underlying LLMs are tracking something corresponding to linguistic reality; and

(2) that ideas from formal semantics could help in the development of LLMs.

That’s all teeth-grindingly abstract and jargony. I don’t presuppose any knowledge of anything mentioned above. Rather, I assume an interested general reader who perhaps knows neither the details of LLM architecture nor formal semantics. I will explain all the above as best I can (which is limited in some cases), and indeed a main goal is simply this expository one: to put some core ideas of LLM design in terms that anyone interested in the deep questions things like ChatGPT make salient can understand. The subsidiary goal is to present the argument about parallelism, and thus to recommend formal semantics, and philosophy more generally, as disciplines apt for thinking about these things.

Bright idea 1: word vectorization

Open up ChatGPT and ask it whatever you like. It will respond with some text, always grammatical, often coherent, and sometimes true. Texts are made up of words, and texts’ meanings are made up of words’ meanings. So somewhere in ChatGPT there must be something, one might think, like a dictionary: an assignment of meanings to words (or, at least some primitives: perhaps morphemes, perhaps something else, but that’s one of many subtleties I’ll ignore throughout).

But what is meaning? It’s a baffling question. One doesn’t run into these things, meanings, in the world. Are they abstract objects like numbers? Do we find them in the brain? Suddenly, what was most familiar — words have meanings — becomes puzzling. And if we’re puzzled, there’s little chance of us being able to program a computer to deal in meaning.

Thankfully, by just attending to the phenomena and not getting lost in the big questions, we can get a foothold into meaning, and we can use that foothold to generate approaches to modelling meaning that bear fruit. We can do so by noting two facts:

Fact 1. Words bear relations to each other. ‘Run’ is related to ‘runs’; ‘goose’ is related to ‘geese’. ‘Realize’ is related to ‘real’. Our dictionary would be very unwieldy (to the point of being impossible for highly inflected or agglutinative languages) if it didn’t systematically capture these facts and instead gave a separate entry for each word form.

Meanings bear relations to each other. ‘country’ bears a relation to ‘place’: the former is a type of the latter. ‘Blue’ and ‘red’ are of the same type. The meaning of ‘Berlin’ is, somehow, related to the meaning of ‘Germany’. And so on.

Fact 2. Words are often complex. The word ‘winning’ presents an action as ongoing; depending on how one differentiates words, it might be both an adjective and something like a verb (‘The winning shot’ vs ‘She is winning’). We see this more clearly with other languages: the Russian ona vyigryvala, which could be translated as she was winning, packs both gender and aspect into the verb.

Words’ meanings are often complex. The famous (if you’re a philosopher) example: a bachelor is a man and unmarried. ‘Queen’ means something like regal and female. ‘Coloured’ is red or green or … .

A theory of language should perhaps take account of this fact: that words are syntactically and semantically (i.e. from the point of view of, roughly, grammar and of meaning) complex, and related to each other.

(Incidentally, as the above makes clear, I’m going to be, throughout, super sloppy about use vs mention: about when I’m talking about words or words’ meaning.)

A version of what’s called the distributional hypothesis can help us in one fell swoop here. According to it, words’ syntactic and semantic features are features of the contexts in which they appear. The more two words appear in similar contexts, the closer they are in meaning. Thus both ‘goose’ and ‘geese’ will co-occur with things like ‘honking’, ‘eating bread’, ‘pond’, and so on. ‘Goose’ and ‘duck’ will co-occur in very many contexts but will differ when it comes to contexts including ‘fly’, or contexts including ‘gaggle’, and many others. ‘Goose’ and ‘bear’ will co-occur with things like ‘moves’, ‘weighs’, and so on. Changing examples, ‘geese’ and ‘bears’ will co-occur with plural verbs; they will occur sentence-initially and bare (without an article) in argument position of verbs, and so on.

Extremely naively, and purely for illustrative purposes, we could try to capture these facts about distribution as a theory of meaning of expressions. We say that words are sets of features, which are not binary but continuum valued. Sticking with this extremely simple example, we might have

“Goose”: Goosiness: 1. Plural: 0.
“Duck”: Goosiness: 0.6. Plural: 0.
“Geese”: Goosiness: 1. Plural: 1.
“Bears”: Goosiness: 0.1. Plural: 1.

(Why is the goosiness of bears 0.1? It’s a perhaps unhelpful artefact of my presentation. In a full treatment, we wouldn’t use goosiness as a primitive but would instead have features like flies, eats bread, forms a gaggle, moves, occupies space, hangs around at ponds, and so on. Bears would score well on weighs something and occupies space; since we’re treating goosiness as an aggregate of all these, bears get at least some goosy points.)

If we could make a theory like that, it seems we could capture our facts. And do so we can. Here’s how. We take a large corpus of sentences. We then let an algorithm work out features of words, so that it spits out entries like the above for all the words in our vocabulary. The algorithm succeeds if it manages to assign to similar words similar feature bundles.

And the algorithms do succeed. I won’t get into the details too much, but we generate, for each word, an n-dimensional vector, where n is the (large!) number of dimensions of the feature space. We can then use vector math to compute similarities between words’ meanings.

This leads to some famous results. One can add, subtract, and multiply vectors, and performing these operations on the vectors the algorithms give us captures semantic relations. We can take the ‘king’ vector, subtract the ‘man’ vector, add the ‘woman’ vector, and what we’ll get, roughly, is the ‘queen’ vector. It’s pretty cool.
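Here is a minimal sketch of the idea in Python, using hand-made two-dimensional vectors; real systems like word2vec learn hundreds of dimensions from a corpus, and the numbers below are invented purely for illustration.

import numpy as np

# Hand-made toy vectors; a real model learns these from text.
# Dimensions here: [royalty, maleness].
vectors = {
    "king":  np.array([0.95, 0.90]),
    "queen": np.array([0.95, 0.05]),
    "man":   np.array([0.10, 0.90]),
    "woman": np.array([0.10, 0.05]),
}

def cosine(u, v):
    # Similarity of direction, ignoring length: closer to 1 means closer in meaning.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The famous analogy: king - man + woman lands nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(max(vectors, key=lambda w: cosine(vectors[w], target)))   # queen

On real embeddings the match is less exact, and one usually excludes the query words themselves when searching for the nearest neighbour, but the shape of the trick is the same.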

This might sound like magic or nonsense; if so, follow this link and you’ll be convinced by going through an example in some detail. The key point for us is that we have here a viable and — importantly — computationally tractable foundational theory of meaning.

Bright idea 2: reference and compositional semantics.

Vectorization, in one form or another, is how we deal with meaning in LLMs. What about in humans? Do we walk around doing vector math? Do our brains contain a bunch of numbers?

While the answer isn’t necessarily no, the tradition known as formal semantics, which arose in philosophy and can now be found in some linguistics departments, proposes a possible other answer.

As I understand it, if the core idea of the vectorization approach is similarity between words, the core idea of the compositional approach is the parthood relation between complex expressions and the parts they are made up of.

Consider, to use an example straight from Frege, one of the founders of the field:

  • Caesar conquered Gaul

As with the above, there are both syntactic and semantic things to say. We can note, at least somewhat pre-theoretically, that the sentence appears to have a syntactic structure: it is not merely a concatenation of words. Compare:

1 Caesar and his army conquered Gaul
2 Caesar conquered Gaul and Britain
3 Caesar conquered and subdued Gaul
4 Caesar conquered Gaul and invaded Britain

? Caesar conquered and Octavius liberated Gaul

(One might disagree with the ? and note that aptly placed commas make this more acceptable. But consider “Caesar thought about and Marius escaped from Gaul”. That is surely not so good, and will suffice for my point.)

The acceptability of 4 and the dubiousness of ? give some reason to think that the sentence has hidden structure and should be parsed

  • Caesar {conquered Gaul}

Nothing fancy is meant: just that ‘conquered Gaul’ seems to go together, in a way that ‘Caesar conquered’ doesn’t, as the conjunction data suggests (but only suggests).

So expressions bear interesting non-obvious syntactic properties, and syntactic theory can give you as sophisticated and detailed an account of that as you care for. They also bear semantic properties. Most obviously: what the sentence means is somehow dependent on what its parts mean. It’s something to do with what ‘Caesar’, ‘conquered’, and ‘Gaul’ mean that makes the sentence mean what it does.

But what is this dependence? Consider another example, not a sentence this time:

  • The mother of Caesar

If you don’t pay too much attention to what I just said, there’s at least some reason to think the form of this is:

{The mother of} Caesar

And you can note that there’s the same dependence of the meaning of the whole on that of its parts. This idea of dependence is often glossed as compositionality, viewed as a fundamental constraint on a theory of meaning: an expression’s meaning is determined by its parts’ meaning and how they’re combined. This whole section is explaining compositionality.

We can continue our analysis. It’s intuitive that the whole stands for an object in the world: for Aurelia, Caesar’s mother. (Let’s ignore her long deadness.)

Moreover, this analysis seems reasonable: the expression ‘the mother of’ is sort of a machine that takes in objects (or rather words standing for objects) and gives you back that object’s mother.

We reach a deep and important distinction. Some bits of language are machines, and others the things you put into machines. And just as with the vector approach, this view is computationally tractable. A machine, put in fancier words, is a mathematical function. Some complex meanings are the result of applying the function to an argument. Which expressions are the machines and which are the arguments is determined by the syntactic structure that analysis of the sentence reveals.

If that is so, then we have a Big Picture Theory:

BPT: The meaning of a complex expression is the result of applying the function part to the argument part.

This isn’t the whole story. But it’s remarkable how far we can get with it. Return to our example. We need to find a machine and an argument. If ‘Caesar’ stood for the object before, maybe it does again? So ‘conquered Gaul’ would be the machine or function.

But note that, unlike ‘the mother of Caesar’, ‘Caesar conquered Gaul’ doesn’t stand for an object. So what does it stand for?

The provisional answer (one I profoundly apologize to my students for the badness of, asking them to just trust me for a second, please) is that it stands for a weird new thing that Frege called a truth value. A truth value is an abstract object like a number, and every sentence stands either for the True or for the False. So every true sentence stands for the same thing.

That’s obviously a terrible result, but we can fix it (a bit; but we won’t here); in the meantime, let’s see why we might like it.

We might like it because we can reason as follows. ‘Caesar’ stands for an object. The sentence stands for a truth value. The rest of the sentence must therefore stand for a function. And we know its input and output, so we know what function it must be: the one that maps an object to the True provided that object conquered Gaul (and to the False otherwise).
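We can mimic this picture directly in code. Here is a minimal sketch, with a made-up toy domain standing in for the world; plain Python functions play the role of the Fregean machines, and none of this is anyone’s official formalism.

# Toy domain of objects.
CAESAR, AURELIA, GAUL = "Caesar", "Aurelia", "Gaul"

def the_mother_of(x):
    # A machine from objects to objects.
    return {CAESAR: AURELIA}[x]

def conquered(y):
    # 'conquered' eats an object and returns a new machine
    # from objects to truth values.
    return lambda x: (x, y) in {(CAESAR, GAUL)}

conquered_gaul = conquered(GAUL)    # the machine corresponding to 'conquered Gaul'

print(the_mother_of(CAESAR))        # Aurelia: the phrase stands for an object
print(conquered_gaul(CAESAR))       # True: the sentence stands for a truth value
print(conquered_gaul(AURELIA))      # False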

(An aside: this approach really shows its power when we come to sentences like

  • Everyone conquered Gaul

And this approach, when combined with the idea of syntactic analysis, really really shows its power when we consider sentences like

  • Caesar castigated everybody.

The analysis of the latter would take us way off course. But think about the former. You know the rules: one part must be a function, one an argument. There is no rule about which part of the sentence must be which; there is no restriction on the nature of arguments (functions themselves can be arguments). Can you think of the meaning of ‘everybody’?)
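In case it helps to see the standard shape the answer takes (this spoils the exercise slightly), here is another toy sketch: ‘everybody’ is itself a machine, one that eats a predicate-machine and returns a truth value.

PEOPLE = {"Caesar", "Marius", "Octavius"}     # a toy domain of people

def everybody(predicate):
    # Takes a machine from objects to truth values; returns True just if
    # that machine returns True for every person in the domain.
    return all(predicate(x) for x in PEOPLE)

def conquered_gaul(x):                        # as before, a toy predicate
    return x in {"Caesar"}

print(everybody(conquered_gaul))              # False: not everybody conquered Gaul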

Why this matters

A fundamental question about language is how we manage to understand it. A language has an infinite number of sentences, so we can’t have encountered them all. Infants learn words incredibly quickly. We need some sort of mechanism to generate the syntactically well-formed sentences of our language, and that mechanism must be a finite thing capable of generating an infinite number of sentences. Formal semantics says we want the same thing for meaning, and the function-theoretic semantics promises to give us it.

Generation in LLMs: unreasonable effectiveness

It’s helpful to divide a theory of language into a theory of primitives and a theory of complexes. As we’ve just seen for natural language semantics, the two ideas are so intertwined they are best understood together. This isn’t quite so for NLP. In the section above I gave the theory of primitives (vectorization). Now I’ll give a theory of complexes, which is to say a theory of how one can generate sentences using a finite program.

In fact, my goals here will be modest. I won’t do anything like give a full-dress account of how ChatGPT generates its sentences, both because it would take ages and because some of the technical details are beyond me. Instead, I’ll summarise some work popularised in a fantastic and famous blog post by Andrej Karpathy about how to generate sentences relatively close to English via simple methods.

Imagine walking in a deserted place and coming across the following (in some form):

Naturalism and decision for the majority of Arab countries’ capitalide was grounded by the Irish language by [[John Clair]], [[An Imperial Japanese Revolt]], associated with Guangzham’s sovereignty. His generals were the powerful ruler of the Portugal in the [[Protestant Immineners]], which could be said to be directly in Cantonese Communication, which followed a ceremony and set inspired prison, training. The emperor travelled back to [[Antioch, Perth, October 25|21]] to note, the Kingdom of Costa Rica, unsuccessful fashioned the [[Thrales]], [[Cynth’s Dajoard]], known
in western [[Scotland]], near Italy to the conquest of India with the conflict.
Copyright was the succession of independence in the slop of Syrian influence that was a famous German movement based on a more popular servicious, non-doctrinal and sexual power post. Many governments recognize the military housing of the [[Civil Liberalization and Infantry Resolution 265 National Party in Hungary]], that is sympathetic to be to the [[Punjab Resolution]](PJS)[http://www.humah.yahoo.com/guardian.
cfm/7754800786d17551963s89.htm Official economics Adjoint for the Nazism, Montgomery was swear to advance to the resources for those Socialism’s rule, was starting to signing a major tripad of aid exile.]]

It’s nonsense, certainly, but it’s Englishish, or rather English-wikipedia-markdownish nonsense, at least to some extent. The words are English words; many of the complex expressions are grammatically sound; even the wiki markdown is right.

You’d be confused, I take it, encountering this. It doesn’t look like the sort of garbled English that children, beginning learners, or people with language processing issues produce. But it is that kind of thing: the kind of malformed English of which children and others are producers.

The big and unsurprising reveal is that it was generated by a computer. The more interesting reveal is that it was generated by a computer with no conception of the English language, or indeed of language at all, and it was generated by a method that is reliable (it wasn’t a fluke) and comparatively undemanding in data and compute. It is an example of what Karpathy calls the unreasonable effectiveness of recurrent neural networks. For our purposes, the thing to note is the success of the generation mechanism. (The analogy with natural language semantics gets a bit strained here: the RNN we’re dealing with doesn’t take vectorized words as input. You’ll need to trust me that an RNN that did take vectorized words, using the same sort of method, could produce even more impressive results.) So let’s say a little about the mechanism.

Neural networks compute functions. Say we want to recommend movies to users. We want a function recommend(user) which returns, say, an array of possible movies. We could try to write it ourselves:

films = [{"title": "Mr Bean's Holiday", "genre": ["comedy"]}]   # toy data

def recommend(user):
    movies = []
    for film in films:
        for genre in user["liked_genres"]:
            if genre in film["genre"]:
                movies.append(film)
    return movies

But at a minimum that requires that we know what genres a user likes. And that fact is liable to be an extremely messy one. My film tastes are pathetically bro-y: Lynch, The Big Short. But I also really like silly comedies like Mr Bean, movies I happened to see as a young teenager, movies with Andy Samberg, and so on. I watch the silly comedies predominantly around Christmas with my family; the others, on Sundays when I have more time. The function mapping me to movies isn’t simple.

And so it goes for most everything. We can overcome this by using a neural network to do its best to work out the function in question. For a simple such case, we label movies with features: box office take, awards, cost, release date, subtitles, and so on. Assuming we can treat them all as scalar, we do so, perhaps converting the features so they have the same range (maybe -1 to 1). We then test the importance of features and combinations of features by seeing how much they influence whether or not I watched a movie in the past. We train the system on my history. We might learn that being released in 1990 and being a box office smash is a good predictor for a movie I might watch. We might learn that winning awards and being subtitled and (not costing much to make or not making much money) is similarly a good predictor. We do this ‘learning’ by assigning random values to the various features and combinations thereof and then finding the best set of values, which we can conceive of simply as factors by which we multiply the features/combinations, that best predicts my tastes. And finding this best set of values is an iterative process involving going back and forward, fine-tuning the model, and testing it again until improvement stops.

That’s massively simplified, of course. But it is the rough idea for at least a class of simple neural networks, ones that have produced super-human capabilities for classifying, among other things.
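To give the flavour of that iterative weight-finding, here is a minimal sketch in the spirit of the description above. The movie features, viewing history, and learning rate are all invented, and this is a single logistic-regression-style layer rather than a real neural network, which would stack many such feature-combining layers (and would include combinations of features as further inputs).

import numpy as np

# Each row: [box_office, award_winner, subtitled], scaled to 0-1. Invented data.
movies  = np.array([[0.9, 1.0, 0.0],
                    [0.1, 1.0, 1.0],
                    [0.8, 0.0, 0.0],
                    [0.2, 0.0, 1.0]])
watched = np.array([1.0, 1.0, 0.0, 0.0])      # did I watch it? (toy labels)

weights = np.random.randn(3) * 0.01           # start from random values

for step in range(500):                       # iterate: predict, compare, adjust
    predictions = 1 / (1 + np.exp(-movies @ weights))   # "probability I watch it"
    gradient = movies.T @ (predictions - watched) / len(watched)
    weights -= 0.5 * gradient                 # nudge weights towards better predictions

print(weights)   # learned importance of each feature, for this toy history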

Language models tend to work with a variant of neural networks better suited to the particular task of text generation. One fundamental observation that the tech must capture is that sentences and other complex linguistic expressions are sequences. If one is to deal with language, one’s networks must be apt for sequential data.

A particular sort of neural network, the recurrent neural network, is better suited to this task than standard neural networks; more recent developments, such as transformers and the attention mechanisms they use, are better still.

I would like to explain to you how exactly RNNs work, but alas lack the technical skills. So instead I’ll explain the problems they can solve and gesture at how they solve them.

Imagine we want to do this: given a sequence of characters, predict the next one.

Consider these two sequences:

t-h-e-_-w-i-d-t-h-?

h-i-t-t-i-n-g-_-t-h-?

What would you predict as the next character? You have a good guess, right? The first is likely to continue with a space followed by ‘o’, as the word ‘of’ is the best fit. (Another possible continuation is ‘s’, if the sentence were something like ‘We took the frogs and measured their widths.’) The second is likely to continue with ‘e’, as ‘the’ is the most likely fit.

At the risk of stating the very obvious, we can’t simply determine the next character based on the previous one, nor on the previous two (and if I were cleverer, I could think of better examples showing how the past n characters won’t determine the next, by presenting two words that share their first n characters but differ in the (n+1)th).

ChatGPT wasn’t much cleverer when it came to illustrating my above point.

Moreover, the example shows that the process of guessing can continue. I went ahead and continued: I suggested that the guess following the space was ‘o’, that the guess following ‘the width o’ was ‘f’, and so on. The process is iterative.

So if we want to generate text, character-by-character, we want both to attend to the context preceding the most recent text and we want to build up a prediction in steps.

It is hopefully possible to squintingly see how one might do this: we just look at a buttload of text and generate conditional probabilities of a character c occurring given a preceding sequence s, and then we generate the text using the most probable continuations, then generate the next step using the previous generation, and so on.
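Here is a toy version of exactly that idea, using counting rather than a neural network and a tiny invented ‘corpus’: it builds conditional probabilities of the next character given the previous few, then samples a continuation step by step.

import random
from collections import Counter, defaultdict

# A tiny invented corpus; a real model would read a buttload of text.
corpus = "the width of the pond where the geese were hitting the water " * 20
order = 4                                      # condition on the previous 4 characters

# Count how often each character follows each 4-character context.
counts = defaultdict(Counter)
for i in range(len(corpus) - order):
    context, nxt = corpus[i:i + order], corpus[i + order]
    counts[context][nxt] += 1

# Generate: repeatedly sample the next character given the last 4, and append it.
text = "the "
for _ in range(60):
    options = counts[text[-order:]]
    text += random.choices(list(options), weights=list(options.values()))[0]
print(text)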

Recurrent neural networks enable us to do this at scale: roughly, they take as input both a summary of what has come before (a hidden state built up from the previous tokens) and the current token, and they output a probability distribution for the next token. RNNs differ from standard neural networks in that they feed their own previous outputs back in as inputs; for more than that, alas, you need to read someone cleverer than me!

However, my limited cleverness isn’t a problem, because the important thing isn’t the details (at least, not for my purposes in this post). Rather, it’s the simple and fundamental observation, stated here for character generation but applying equally to word or sequence generation, that one can’t process a bit of language without attending to the context it occurs in. This leads us nicely back to human natural language semantics.

The Dynamic Turn

The Fregean formal semantic theory I began introducing above developed over the course of the 20th century; the luminaries are too many to mention, but once Richard Montague had paved the way for treating natural language as fully amenable to the theoretical tools Frege had introduced for formal languages, people like David Kaplan, Saul Kripke, Angelika Kratzer, Hans Kamp, and Barbara Partee provided sophisticated analyses of environmental context sensitivity, modality (‘must’, ‘necessarily’), and time (‘now’ and tenses). Extending the idea of referential semantics to pretty much every part of speech, a theory of natural language meaning was developed that could cover a lot of data while remaining sensitive to the demand of specifying a finite mechanism that determines an infinite set of sentences.

Around the end of the 1970s, though, a tricky set of problems arose concerning pronouns. These problems led to various schismatic theories of natural language semantics that modify the Fregean framework to centre the fact that linguistic meaning fundamentally depends on linguistic context. This is important for me because this recognition, I claim, mirrors the recognition that first recurrence and later attention were introduced to capture; the human-language people and the machine-language people have been scaling the same hill from different sides.

But back to the problem. In the machine learning literature, the fact that language depends on linguistic context (hereafter, to save typing, simply ‘context’) is sometimes called long-distance dependence. Consider:

  • She, having panickedly started the car by turning the broken key with a wrench, and waved goodbye to Alex, rolled up ?

A reasonable next word prediction, one that many corpora would surely suggest, is ‘her’ and then ‘window’. But what does ‘her’ mean? Well, we know — it means the unnamed subject of the sentence. But to know that we have to look quite a way back, to the start of the sentence. Next word prediction is going to need to model long-distance dependencies.

But the semantics literature makes evident that problems can arise even when the distance isn’t long. Consider:

  • A man entered. He took a seat.

‘He’ depends on ‘A man’; but the distance isn’t particularly long. (For reasons I can’t get into, the distance can be made shorter: the problem I’m about to describe famously arises for the pronoun and its antecedent in ‘Any farmer who owns a donkey beats it’.)

The problem is, roughly, that the influence of a noun phrase is very limited: it can only control the interpretation of expressions ‘close enough’ to it. One way of not being close enough is being in a different sentence; another is the more subtle phenomenon, exhibited by donkey anaphora, of not being c-commanded by the antecedent.

Classical natural language semantics ignores context in at least this sense: expressions too far from their meaning-giving antecedents can’t get their meaning from them (and this applies to other phenomena, such as tense, which exhibit similar dependencies).

More generally, classical semantics tended to operate on a sentence or expression-based level. But examples like the above tend to suggest that meaning is a property of discourses — bodies of sentences — as a whole, and not parts of discourses. Repeating myself, we can read recent developments in machine learning as making the same recognition.

In response to that, people like Irene Heim and Hans Kamp developed semantic theories able to capture this (seeming) fact about meaning. Very roughly, the idea is that a discourse builds up a set of constraints on how the world must be and what it must contain, and it’s these discourse-level constraints, as opposed to individual sentences considered in isolation, that we semantically evaluate. In our discourse, the first sentence introduces the constraint that

x: Man(x).

The second sentence then adds information, yielding

x: Man(x), sat_down(x).

The key intuition is that we constantly need to be updating something given by the discourse, and need to bake deep into the semantics the fact that pronouns like ‘he’ can’t be given a meaning in and of themselves. These dynamic theories are both varied and controversial, potentially offering solutions for whole swathes of language.
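Pictured crudely in code (a toy sketch, not Kamp’s or Heim’s actual formalism), the discourse is a running record of referents and conditions that each sentence updates, rather than a list of independently evaluated sentences:

# A discourse representation as a running store of referents and conditions.
drs = {"referents": [], "conditions": []}

def update(drs, new_referents, new_conditions):
    # Each sentence adds to the record rather than being evaluated alone.
    drs["referents"] += new_referents
    drs["conditions"] += new_conditions
    return drs

update(drs, ["x"], ["man(x)", "entered(x)"])   # "A man entered."
update(drs, [], ["took_a_seat(x)"])            # "He took a seat." -- 'he' reuses x
print(drs["conditions"])                       # the whole discourse is what gets evaluated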

Attention Is All You Need

Many of the eye-catching models of today attempt to capture contextual dependency not with recurrent neural networks but with ideas called attention and self-attention. With the same caveat as above, that my technical knowledge isn’t what it ought to be, here’s what I take the idea to be.

Consider this English-Russian sentence pair:

  • She was winning. ona vyigryvala.

Background: Russian packs gender, tense, and aspect into a single verb form; English spreads that information across several words. Russian does have gerund-like forms, or at least expressions one would translate as -ing words (in fact, it has very many for each English one). Trying to do a word-for-word translation would produce the sort of nonsense you may remember from online translation as recently as six or so years ago.

Rather, and a bit picturesquely, imagine we’ve so far translated ‘she’ as ‘ona’, and we’re trying to translate ‘winning’. One thing to note is that we don’t just want to convey the notion of winning (I mean the activity of succeeding in some vaguely competitive sense), nor only the imperfective aspect attached to the English word. That wouldn’t distinguish it from:

She is winning
She will be winning.

Rather, it seems we want our translation to convey winning [the activity] + imperfective + past. But more: because Russian past-tense verbs encode the gender of their subject, we also want to encode gender.

So when you think about it, when we’re trying to get the word ‘vyigryvala’ as a translation for ‘winning’, we need to attend not only to ‘winning’ but also to ‘was’ and even to ‘she’: the whole sentence! Self-attention lets us do this. Roughly, when encoding the English sentence, and we’re at the word ‘winning’, we have a mechanism that looks over the other parts of the sentence and, where relevant, folds the meaning of those parts into the encoding of the current word ‘winning’, so that the input to the translation process is something like a vector conveying winning+imperfective+she+past. And such a vector will be a pretty good match, although I’ve skipped so many details, for our Russian word.
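Stripping away the learned query/key/value projections and the multiple heads of a real transformer, the core move can be sketched in a few lines: each word’s vector is rebuilt as a weighted blend of all the words’ vectors, with the weights coming from dot-product similarity. The embeddings below are invented purely for illustration.

import numpy as np

# Toy 3-dimensional embeddings for "she was winning"; invented numbers.
words = ["she", "was", "winning"]
X = np.array([[0.9, 0.1, 0.0],     # she:     feminine-ish dimension high
              [0.1, 0.8, 0.1],     # was:     past-tense-ish dimension high
              [0.2, 0.3, 0.9]])    # winning: winning-activity dimension high

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Self-attention, stripped down: similarity scores between every pair of words...
scores = X @ X.T / np.sqrt(X.shape[1])
weights = softmax(scores)          # ...turned into mixing proportions...
contextual = weights @ X           # ...used to blend every word into each word.

# The new vector for "winning" now carries some "she" and some "was".
print(dict(zip(words, np.round(weights[2], 2))))
print(contextual[2])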

If I’m wrong about the details, correct me! The key point is this. The dynamic turn in semantics was premised on the observation that to get an adequate account of meaning we need to look back and develop representations on a discourse level; it is arguable that the attention-mechanism is another way of capturing this. For dynamic semanticists ‘he’ is going to be man+sat_down; for transformers, ‘winning’ is going to be win+past+imperfective+she.

So what?

With that odyssey over, I can finally make the points I want to make. They are two. The first is this: I claim that the development of natural language semantics has mirrored the development of natural language processing. Each contains a method for assigning primitive meanings (reference vs vectorization); each a generation mechanism (compositional function application vs recurrence-style methods combing large texts for probabilities); and each, perhaps most interestingly, an increasing recognition of the importance of linguistic context, up to and including the positing of representational items that ‘pack in’ the meaning of several expressions into one unit, as fundamental to semantics.

This similarity is inherently interesting, but more important is the consequence: if you buy my story, then you should be open to the possibility that the formal semantic tradition, and perhaps philosophy of language in the analytic tradition more broadly, could tell us things about natural language processing in deep learning.

What would that look like? I will end by suggesting what I think is a promising possibility. Now that the (extremely hard!) problem of generating grammatically perfect, fluent, and meaningful language appears to be solved, critics are focusing on the large unsolved problems. Salient here are ‘hallucinations’: the fact that, as critics like Emily Bender and Gary Marcus say, LLMs just make shit up. Meta’s Galactica was asked to describe the Streep-Seinfeld theorem, and alas it proceeded to generate plausible nonsense that, people argue, could be the equivalent of a DDoS on academia were it used incautiously.

But I wonder whether tools from philosophy could help. One might argue that what’s gone wrong is that current systems don’t have the means to adequately represent objects, and can’t distinguish the real from the fictional. As it so happens, philosophers have written a ton about how the main devices of objectual reference work, and have also done interesting work, now more speculative (Strawson), now more empirical (Burge) on how objectual reference to an external world comes about, and why it’s important. Strawson, for example, thinks that one can’t have a concept of mind-independent reality without a conception of a world of particular enduring objects. It could be that the same applies to artificial entities.

Of course, this is the most wishy-washy of promissory notes and shouldn’t convince anyone. But the meta point, I hope, stands: the seemingly parallel development of the two approaches to meaning surveyed here should encourage the thought that there can be more fruitful interaction between the two sides than there has been up to now.
