AI Doom and the nature of prediction

Matthew McKeever
May 27, 2023

Some predict that superintelligent AI will command all the world’s resources, including us, and will use those resources — including, unfortunately for us, our atoms — to further its goals. This atom-borrowing will lead to the death of everyone, these people think. Call this prediction doom.

What I want to do here is consider the merits of doom as a prediction. We can begin to assess it, I think, if we understand what predictions are, how good we are at them, past successes and past failures, and so on. Notably, we can do all this without considering the object-level facts: the facts, or lack thereof, about whether we are in fact going to die at AI's hands. Instead, we can ask whether it's reasonable to predict we're going to die.

You might think this is a distinction without a difference: it's reasonable to predict we're going to die just in case we're going to die. But that's wrong. The lottery numbers last night were 13, 54, 67, 1, 2, 19. Someone who predicted those were going to be the numbers (say, because that string of digits was their first phone number) was right, but wasn't reasonable. This is because, in philosophers' jargon, something can be true without being justified: without being reasonable. For a prediction (for any claim, really) to be reasonable is for there to be good reasons for it. I'm going to explore whether there could be good reasons for the doom prediction. I'm not going to explore whether there are such reasons, not least because the proponents of the doom view are oddly squirrelly about what those reasons actually are, secreting them in blogs and refusing to set them out in a neat canonical form.

An analogy might help explain my aim: I might wonder whether I should eat the contents of a bag. But even before looking in the bag, if I know it's made of asbestos, I know I shouldn't eat the contents, onto which the asbestos will have rubbed. I want to see whether the predictive bag that doom comes in is asbestosy, and thus whether or not to feast on the doomy prediction contained within.

Alright, so: predictions. Is there a class of predictions that human history has revealed to be particularly accurate or particularly inaccurate? Let’s run through some options.

What are we like with sci-fi predictions?

That superintelligent artificial intelligence is going to kill us sounds like it’s from sci-fi. Is that a reason to disbelieve a prediction of it?

I can see both sides. On the one hand, it seems ridiculous. Does it really seem likely that our existence is going to end as if it were a bad sci-fi novel? No, you might think, and so any prediction saying it will is wrong. Things are mostly grey people called Martin changing tax policy, commuting, eating starches, and rearing children; they are not wild alien super-genius entities killing 8 billion people.

But that's not really an argument, yet. And you might note that people say sci-fi actually has a not-bad track record of predicting the future. Moreover, it's not as if ridiculous things don't happen: Donald Trump is named like a Pynchon character and wouldn't be out of place in certain brutal corners of American postmodern fiction. ChatGPT is already pretty science-fictiony, and it's a fact. Weird things happen. So given Trump, LLMs, and the supposed success of previous sci-fi predictions, maybe a prediction's being sci-fi-sounding doesn't ipso facto make it bad?

But recall our distinction between truth and justification. Even if weird things happen and some sci-fi has predicted things, that doesn't do enough to justify us in giving credence to a given sci-fi prediction until we know how many sci-fi predictions failed to materialize. Although there are Trumps and LLMs, there are many, many more boring things, like Olaf Scholz and my apartment's air conditioner. It could be that though sci-fi-style things happen, we are never in a position to justifiably predict them, because the number of sci-fi things that don't happen is much bigger, so something's being sci-fi speaks against its being true.
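To make that base-rate thought a bit more concrete, here's a toy calculation. The numbers are entirely made up for illustration; the only point is the shape of the reasoning.

```python
# Toy base-rate calculation with invented numbers, purely illustrative.
# Question: given that a prediction sounds like sci-fi, how much credence
# should we give it?
p_true = 0.01                 # assumed base rate: big dramatic predictions rarely come true
p_scifi_given_true = 0.5      # assumed: half of the true ones sound like sci-fi
p_scifi_given_false = 0.9     # assumed: most of the false ones sound like sci-fi too

p_scifi = p_scifi_given_true * p_true + p_scifi_given_false * (1 - p_true)
p_true_given_scifi = p_scifi_given_true * p_true / p_scifi

print(round(p_true_given_scifi, 4))  # ~0.0056: lower than the 1% base rate we started with
```

On these (invented) numbers, a prediction's sounding like sci-fi lowers rather than raises how much credence it deserves, which is just the point about failed sci-fi predictions in numerical dress.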

So I think the very basic thought, "it's weird, so we should be dubious", goes a long way, but isn't decisive.

Predictions of existential significance are untrustworthy

If doom is right, it’s probably the most important event in human history. Should we or should we not believe that the most important event in human history is predictable?

I encourage you to think about the question. I don't really know what to think. But here are some preliminaries. Firstly, in defense of yes: it seems we should believe it, because climate change is probably the most important event in human history and it's predictable. Extrapolating a bit, climate change shows we can have justified predictions about matters of existential significance.

At the same time, excluding climate change, how, in general, do we do at such important predictions? You might think: not great! The salient examples in our history of analogues to AI takeover (plausibly things like farming, the industrial revolution, and the information age) were, as far as I can see, not predicted. Nor, scaling down dramatically, was the fall of the Soviet Union, or the rise of China, or Trump, or Brexit, or, or ...

It could be we’re generally lousy in this domain. This would make sense. As I understand it, there are some people who are really good at predictions. But to be good at predictions involves being a little bit right a lot of the time, rather than massively and spectacularly right. (If we think of the game of predictions as something like a market, we should expect this, just as we expect the way to make money trading is by exploiting lots of small blips in prices rather than making sweeping calls.) Things of existential significance are going to be events outside the range of typical superforecasters, thus perhaps not the subject of good forecasts. Maybe.

Existentially important predictions believed by a fringe minority are untrustworthy

The reason we feel okay with climate change predictions is that there's a large-scale, science-backed consensus. If there weren't such a consensus, we wouldn't feel good about them. So here: in light of the fact that most people disagree with the doomers, we should disregard them. Yay or nay?

On the nay side: we might think that we can divide and conquer to show that we shouldn't disregard them. Climate change shows us that we can predict matters of existential significance; things like the precession of Mercury in general relativity show us that individuals can make successful predictions; and putting those together, we get that individuals can predict matters of existential significance.

But on the contrary, and here's what I think is the heart of the matter: if we heed my advice above to actually look at examples, things start to look bad for the doomers. What do you think of when you think of existentially important predictions that went against consensus?

I think of two: Malthus on population and Keynes on automation. The former predicted that, given the rate at which the population was growing, we'd run out of land and thus of food. He was wrong because he failed to take into consideration the technological innovation that made the same amount of land able to feed more people. Keynes thought, given this and related innovations, that by roughly now we'd be able to generate enough food by working a mere few hours, and that, given that possibility, we would, and so we'd all be living lives of leisure. He was wrong too.

Both of these meet our criteria; the former is arguably one of the most famous failed predictions in history, and while the latter is less famous, it shows how unbelievably complicated predicting these things is. Not only do we have to account for resources and their distribution (land and its carrying capacity), but we have to account for difficult ideological or axiological facts, like the fact that the 20th century wasn't fruitful soil for anti-capitalist ideas.

Of course, two examples don't make a case. I leave it open for people to disabuse me of my ignorance. But here's where this chain of thought ends for me: there is a class of predictions, Malthus-type predictions, typified by being of existential significance and held by a minority (and for a priori reasons). Predictions belonging to that class aren't justified. Doom belongs to that class. So doom isn't justified.

Having made the argument against doom, I now consider an intriguing line of thought in defense of the view that it's a reasonable prediction.

Two Facts About Prediction

Consider

Case 1. There's a ball in a room, from holes in the walls of which, at all angles, speeds, and pressures, comes jet-powered air. Beside the holes are sort of peg things that look like door handles, on one of which is written 'exit'. The air comes with enough force to lift the ball off the ground and fling it, and the ball is flung to and fro. The many jets interact chaotically. Your task is to predict where the ball will be at some time in the future.

Case 2. As above, but the ball is a person.

Which is easier to predict? Case 1, obviously, you might think: a ball is a far simpler thing than a person.

But in fact Case 2 is much easier to predict. Knowing that people don't like being buffeted around like that, and guessing that the person can read, you can guess: once they see the 'exit' sign they will endeavour, when near the exit peg, to grip onto it and hold on for dear life. And so you can make a tentative prediction that that's where you'll find the person. Not definite, but reasonable: and much better than any guess you could make about where the air-tossed ball would end up.

So we have this weird difference: Case 2 is easier to predict than Case 1, despite the fact that the only difference between them is that we've swapped an object for a much more complicated one (ball for person). A general lesson: sometimes more complicated objects facilitate easier predictions. They do so, in particular, when there is a science or discipline whose laws govern those more complicated objects directly.
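If it helps, here's a minimal toy sketch of that lesson in code. The "physics" model is just a stand-in for the chaotic jets (a logistic map, nothing to do with real air), and the "psychology" model ignores the dynamics entirely; the names and numbers are mine, purely for illustration.

```python
# Toy illustration of levels of description. chaotic_ball is a stand-in
# "physics-level" model: tiny differences in the starting state give wildly
# different answers, so prediction is hopeless. person is a "psychology-level"
# model: it ignores the dynamics and just predicts the goal-directed outcome.
def chaotic_ball(x, steps=1000):
    for _ in range(steps):
        x = 3.99 * x * (1 - x)   # logistic map in its chaotic regime
    return x

def person(x):
    # Whatever the buffeting does, a reader who hates being flung about
    # ends up in the same place.
    return "clinging to the exit handle"

print(chaotic_ball(0.2000000), chaotic_ball(0.2000001))  # wildly different answers
print(person(0.2000000), person(0.2000001))              # the same answer
```

The two starting states differ in the seventh decimal place, yet only the "person" model gives a usable prediction: that's the sense in which the more complicated object is the easier one to predict.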

Now here's a thought: maybe our superhuman future is to our current state as Case 2 is to Case 1. It contains a new and more complicated entity, but the laws that entity is subject to make its behaviour, in the future case, easier to predict.

Let me just repeat the main idea: a given system can be unpredictable from one perspective but easily predictable from another. We are such systems: trying to predict our actions from a physics perspective is essentially impossible; from a psychological perspective, it's sometimes easy. If that's what our future is like, then maybe superintelligence would be easy to predict using the superhuman perspective even if impossible to predict from ours. And so maybe predicting doom becomes more reasonable.

That's obviously a big maybe, and neither I nor, I think, anyone else has done the work to show that there is such a superintelligence perspective. But it perhaps lets one understand the baffling feature of the doomers, namely their confidence. It's tempting to think that if they regarded their prediction as a Malthus-esque one, there's no way they'd be so confident. So maybe they don't, and they're working in a framework in which one can be extremely confident about complicated systems by virtue of there being laws for those systems.

And here's fact two. Consider three games of chess: one between two people who don't know the rules; one between two who know the rules but nothing else; and one between two somewhat keen players. And try to predict the board layout after four moves.

The thing to note is that again prediction ease increases as a certain other quantity increases: in the case above it was complexity, here it is intelligence.

(For the record I take any talk of intelligence as a unified thing with a grain of salt. Maybe it can be replaced for these purposes with something like being good at doing stuff. As far as I know, and embarrassingly, the whole literature on artificial intelligence lacks anything good to say about its second word.)

To see this, note that for the players who don't know the rules, basically any board configuration is possible. For the second pair, we can have some confidence that pieces are only on squares they could legally have reached in four moves, which obviously constrains the outcomes a lot. And for the third, we can really say quite a lot. Probably one of the centre squares is occupied; we can maybe recognize an Italian Game or something. We probably don't see a white pawn on a6 and a black one on h6, for example, although that's a possible (legal) configuration.
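For what it's worth, here's a small sketch of that constraining effect, using the python-chess package (an assumption: you'd need it installed). The "sensible" filter is not real chess knowledge, just an invented stand-in for keen-ish play; the point is only that the set of reachable positions shrinks as play becomes more constrained.

```python
# Count distinct positions reachable after a few plies, (a) under any legal
# play, and (b) when moves are filtered by a crude stand-in for "keen" play
# (captures, checks, or central pawn/knight development). Illustrative only.
import chess

def reachable_positions(depth, move_filter=None):
    seen = set()
    def walk(board, d):
        if d == 0:
            seen.add(board.board_fen())
            return
        moves = list(board.legal_moves)
        if move_filter is not None:
            kept = [m for m in moves if move_filter(board, m)]
            moves = kept or moves   # fall back if the filter rules out everything
        for m in moves:
            board.push(m)
            walk(board, d - 1)
            board.pop()
    walk(chess.Board(), depth)
    return len(seen)

def sensible(board, move):
    # Invented heuristic, not real chess knowledge.
    if board.is_capture(move) or board.gives_check(move):
        return True
    central_file = chess.square_file(move.to_square) in (2, 3, 4, 5)  # files c-f
    developing = board.piece_type_at(move.from_square) in (chess.PAWN, chess.KNIGHT)
    return central_file and developing

print(reachable_positions(3))            # any legal play: thousands of positions
print(reachable_positions(3, sensible))  # constrained play: far fewer
```

The exact counts don't matter; what matters is the gap between them, which is the "being good at doing stuff makes you more predictable" point in miniature.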

So it goes elsewhere: an expert golfer's shot is easier to locate (take a look at the fairway or green) than an amateur's (it could be in the water, in the woods, or a foot off the tee). An expert coder's hello world is easier to guess than that of someone just starting a course, and so on. Being good at doing stuff makes you more predictable.

And so, if intelligence is something like being good at doing stuff, intelligence makes prediction easier. But if the whole deal of ASI (artificial superintelligence) is that it's super on the 'I' front, then there's some reason to think it might be easier to predict, and so again we get a way of understanding the doomers' gigantic confidence.

Accordingly, and contrary to the first half of this post, if you buy these facts about prediction, then perhaps the very qualities that make ASI seem prima facie hard to make predictions about, its complicatedness and its intelligence, in fact make it easier to predict.

Here's how I view it. We have two facts: the fact that we seem to be pretty sucky at predictions like Malthus's, and the fact that certain slightly unexpected features of a situation, ones liable to be instantiated in an ASI future, make it more predictable. Both have some force. I don't know which wins.
