Review of The Internet Is Not What You Think It Is
The book mentioned above is by Justin E. H. Smith, a professor of philosophy in Paris. In addition to a large body of scholarly work, mainly on early modern philosophy, he’s been branching out into ‘pop’ or ‘public’ philosophy, and this, in addition to blogs and podcasts and other such things, is either his second or third book-length work aimed at a general audience (you can read about his many doings here and at his linked website). Its subtitle tells us it’ll give us a history, a philosophy, and a warning, and it does, often simultaneously: it philosophizes by historicizing, it warns by philosophizing, and so on.
It’s a very good book that I highly recommend. I’m going to concentrate on the philosophical positions one can extract from the discussion. This is a bit of an amputation: much of the pleasure of the book lies in the historical details it brings up, and in the way ideas are thrown out as asides without being developed. But it’s also a sign of the book’s merit: much public philosophy doesn’t succeed in teaching anything new or changing your mind, and this book does, or at least it changed my mind, or made me see afresh certain things I took myself to be very familiar with. I’ll talk about Smith’s morality of attention, his views about AI, and his views about the role metaphor plays in shaping both how we understand the world and, thus, how it can come to shape the world itself. Along the way we’ll consider, glancingly, the methodological question of what sort of role this general audience philosophy can or ought to play in culture in general.
The Morality Of Attention
“The quantity of creative genius [at a time] is directly proportional to the quantity of extreme attention present during that time”, Simone Weil, in Gravity and Grace
It’s by now old news that we inhabit an attention economy. We are given free digital things in exchange for letting their providers show us ads, which have to capture our attention to work. A further complexity is that the free digital things are typically partly constituted by our actions done with them, and those actions themselves seek attention. We tweet to get attention, for good reasons or bad; we give, willingly, attention to other people’s tweets, and sprinkled in among them are the attention-maximising adverts. Just as the best minds of our generation (so the social media proverb goes) are working out how best to maximise time-devoted-to-ads, so the rest of us are working out how best to get our share of the attention pie. We thus learn, for example, that asking questions on social media drives a lot of engagement, and thereby come to enact ourselves the very same thing the platform makers do. (The platform makers have nothing to say, but provide the means by which we can say things; the questioner adds nothing, but provides a vehicle, in the form of a quote-retweetable or reply-to-able tweet, that others can use to add ‘content’ in the form of hopefully interesting or amusing answers, which themselves provide a platform, &c., although diminishing returns quickly kick in.)
This is well-traversed ground (I myself have traversed it, kind of, at book length). We’re familiar with the economic arguments against the attention economy (presented in e.g. Shoshana Zuboff’s work); with the political ones (to do with things like polarisation and echo chambers); and with the mental health ones (in the work, for example, of Jean Twenge). Philosophers too have gotten in on the action: in influential work that covers similar ground to some of Smith’s criticisms, C. Thi Nguyen, for example, has described how Twitter gamifies communication.
Thankfully, Smith brings something new, presenting what I take to be a novel and pretty compelling argument concerning the core concept of attention, a concept we’d surely do well to understand if we’re talking about the attention economy (after all, it makes up fully 50% of its name). Having presented an interesting brief overview of some work on attention by philosophers from the Buddhists to contemporary analytic philosophers, Smith makes the point that attention has always had a moral component, and uses that point to develop a novel argument whose main claims are the following:
- Attention has a moral component, because it involves, fundamentally, an openness to an otherness. He writes:
To properly attend to something…is to relinquish control of the meaning it holds for you, to allow it the potential to become something else, something unfamiliar within the context of your prior range of references
- That openness takes time to cultivate; morally beneficial attention takes time
- Platforms provide, and must provide, an ever-changing parade of objects of attention, toward none of which can one properly cultivate a deep attentive attitude (their model ‘maximizes solicitations upon a user’s attention and ensures that the attention is never focused in one place for long’)
- So, platforms prevent the cultivation of attention, a moral good
I think this is a fascinating argument. Thinkers as diverse as Simone Weil and David Foster Wallace have, in the latter case (iirc) in slightly different words, appealed to the moral dimension of attention. For Wallace, elite human activity (which notably includes recovering from addiction, high-level sports, and working boring jobs) involves a state of entire absorption that seems worthy of being called, and considered, attention (I’m afraid I don’t have references to hand, so this is a ‘source: dude, trust me, okay’ sort of thing; incidentally, in writing this I learn there’s a new DFW book with the word ‘attention’ in its title).
For Simone Weil, by contrast, prayer just is pure attention, and (studying being a ‘gymnastique’ of attention) doing math or Latin homework and giving oneself over to it has a moral valence equivalent to prayer. For Smith, the example of attention is giving oneself over to a big novel, like reading Proust. He points out that merely skimming a work and actually engaging with it are two different activities (in philosophy, maybe, the distinction is between people who try to present gotcha arguments and those who try to help you, by taking on your premises and showing you where they lead, even if that isn’t where you want to go). A life lived on platforms is one where your attention muscle atrophies, and the big books, and the moral commitment to the author, to yourself, and to the people close to you, atrophy with it.
So although the examples are different, I’m tempted to say that Smith is here presenting, in his own way, an underexplored facet of the attention economy, one requiring a philosophical (maybe phenomenological) sort of, well, attention, and the recognition of the at best quasi-empirical properties it has (such as its moral properties), properties the economists and political scientists and psychologists will be wont to overlook. This is a valuable philosophical lesson of relevance to the broader intellectual debate about the attention economy, and arguably this is the sort of truth one could hope for from general audience philosophy. (And I’m tempted to think that the fact that, as far as I can tell, none of our three thinkers influenced the others, yet they converged on a similar view, is evidence of its truth.)
(I’m further tempted to wonder whether we can pin down the requisite notion of attention more. Coincidentally, in the week this book arrived I picked up a book about mindfulness and tried one of the exercises, which was to pay attention to one’s breathing. I found it waaay more taxing than I would find doing Latin translation, not to mention recovering from addiction or beating Federer at tennis (or, indeed, reading Proust). And I take it the breathing exercises are pretty legit Buddhism, even if often repackaged in a form divorced from it. So I was left wondering, granting with Smith that attention is morally valenced, whether it is hard or easy, breathing or Proust (or vice-versa for some, maybe).)
The Possibility of AI
A central argument of the book concerns the simulation argument presented by Nick Bostrom, and discussions about the hard problem of consciousness. Let me briefly explain these for those unfamiliar. The latter is the problem of explaining the weird fact that we have experience at all: that in telling the whole story of our mental life, once you’ve told about beliefs and desires and headaches and hopes, you’ve still left out that there’s a certain what-it’s-likeness to conscious experience, the production of which from the brain is a hard and mysterious problem. The simulation argument, roughly, can be seen as attempting to make the case that we are, on balance of probability, computer simulations, something like sims in the titular videogame. The thought is pretty simple: it’s likely that at some time in the future we’ll develop genuine artificial intelligence, which will involve conscious artificial agents. If we do so, then plausibly we’ll make a lot of them. Then if sim-consciousness outnumbers bio-consciousness, by a lot, we should think it more likely that any given instance of consciousness, including our own, is sim-consciousness.
A way to maybe see this: consider calculations, qua operations performed somewhen and somewhere on representations of numbers that yield representations of numbers. We can do them, and so can a relatively small collection of logic gates taking a small collection of wires, each bearing or not bearing a current, that together represent a number. Although we’ve been performing calculations throughout all of history, in the last second more calculations have been performed by gates than by humans (I’m guessing; seems right). So, told merely of the existence of a calculation, with no further information about when or what (not to mention how), we should conclude, on probabilistic grounds, that it was performed by a gate.
Ditto experience as a whole, Bostrom says. Our coding skills will develop to the extent that things other than brains can bear consciousness, and actual real-life brain-goo human consciousness will get swamped by artificial consciousness, so based on our evidence (there’s consciousness, namely ours) we should conclude on probabilistic grounds that that consciousness is artificial and not brain-goöy (I hope I diacriticed the right character; I’m hoping to get a New Yorker job on the strength of this review).
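The probabilistic step here is, at bottom, a one-line calculation. A minimal sketch (the counts, and the name `prob_simulated`, are my own inventions, purely for illustration):

```python
# Back-of-the-envelope version of the simulation argument's
# probabilistic step: under a principle of indifference, the chance
# that a randomly chosen instance of consciousness is simulated is
# just the proportion of all consciousnesses that are simulated.
def prob_simulated(n_sim: int, n_bio: int) -> float:
    return n_sim / (n_sim + n_bio)

# If simulated minds outnumber biological ones a thousand to one,
# the odds that a given mind (say, yours) is simulated are ~99.9%:
print(prob_simulated(n_sim=1_000_000, n_bio=1_000))  # ≈ 0.999
```

Nothing in the code is doing philosophical work, of course; all the action is in whether the counts could ever look like that, which is where the question of whether code can be conscious at all comes in.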
Smith claims to have an argument against the simulation argument. But I don’t think he really does, and I think this is the biggest philosophical failing of the book. Here’s a key passage, where it at first sounds as if he’s teeing up a Putnamian argument. For the argument to work, he says, we need to analogise: future humans qua sims are to their putative future creators as current AI systems made in the building down the road are to current biological humans.
He goes on (I have, in unscholarly fashion, removed some text without ellipses; I’m pretty confident I’ve omitted nothing relevant and did so only to save myself typing):
In order for the analogy to hold, we have to suppose that as our AI gets “better” it may start doubting, willing, sensing etc, rather than simply “running”. But this is an enormous supposition to make, and the defenders of the simulation hypothesis generally make it without argument….Bostrom invites us to suppose “that these simulated people are conscious”, and then adds parenthetically, “as they would be if the simulations were sufficiently fine-grained and if a quite widely accepted position in the philosophy of mind is correct”. But this position’s recent popularity is not itself an argument for it….
Certainly its recent popularity is not an argument for it, but Smith doesn’t consider whether there are arguments for it. Indeed, he doesn’t even tell us what the view is; the view is (a version of what’s called) functionalism. Roughly, functionalism about mental properties, including consciousness, is the idea that such properties are defined in terms of their function, which is to say in terms of how they react (do something, produce an output) in a certain situation (when affected, given an input), rather than in terms of what they’re made of. The key thought here is that there’s some reason to think that creatures with neural architecture considerably different from ours are capable of instantiating the same mental properties as us. Aliens are paradigmatic examples, but we can also think of octopodes, or perhaps, closer to home and more philosophically redolent, bats.
Why is there some reason? Well, we can imagine it, in the case of aliens. In the case of the octopus, it surely seems like they’re having a laugh or being sneaky when they get up to their tricks; it certainly seems like the guy in that Netflix documentary met the mind of his octopus companion in some way. But once we allow such sophisticated mental properties as jocularity or sneakiness or emotion, is it really so hard to think that consciousness, whatever that may be, isn’t confined to humans?
We can go further. Speaking personally, the relatively recent advances in, especially, computer chess and Go have helped convince me that something I want to call real, true, as-good-as-it-gets intelligence can be found elsewhere (another line I like, offered by the philosophers Herman Cappelen and Josh Dever, is just to assume artificial intelligence is minded, then see how far we can get with that assumption; see this (I should point out Cappelen is my boss, so I’m a bit biased)). The creativity of AlphaGo, a creativity appreciable even if you’re bad at Go, suggests that to me (see this, maybe; I can’t say I rewatched the linked video, and it’s maybe not one of the best. You can google ‘AlphaGo creativity’ for more). At a certain point, the denier of real artificial intelligence surely has to stop taking Ls, and the safer inference is that, along with creativity, humor, etc., positing qualia for code isn’t such a big deal.
I think this case is instructive. The book would have been strengthened if it had named and introduced functionalism. But, counterpoint: would it have been? To really get the view on the table requires recourse to other polysyllabic Latinate words: multiple realizability, supervenience, and so on. Such a book would soon get wordy, to the detriment of readability. Still, I’m tempted to think that more attention to exactly what the simulation theorist requires would have focused attention on what provides evidence for the simulation view, which is, to repeat, functionalism together with good evidence that echt-mental properties are realized other than in brains. I’m tempted to say that at this juncture the concession to readability comes at the cost of a concession on the strongest argument that could be made here, and I would love to hear what Smith has to say about AlphaGo’s creativity.
Metaphor
But let’s not end on a negative note. In addition to much else, the book, as its subtitle advertises, gives us a history. But that history is not of DARPA and TCP/IP, jQuery and Android. It’s much more interesting (although the former history is cool too): Smith’s idea is that the internet qua telecommunication (communication at a distance) is part of what it is to be a being on this earth. Presenting, in brief, interesting material about the way whales, trees, slime molds, and so on convey information among themselves, he suggests that the internet is just the latest working out, for humans, of what began more obviously with things like the telegram and telephone but also, more intriguingly, with asynchronous long-distance communication, whether of ideas (with smoke signals and such) or of people (with trains, though we can also include horses, going back into prehistory). Roughly (my roughness, not his), he proposes, fascinatingly, that we think of simultaneity as a technology- or society-relative property, such that, to change examples, the six-hour plane ride from Ireland to New York is simultaneous compared to the weeks-long boat trip (and conversely, the milliseconds it takes to transfer an order to a dealer might be an eternity for a high-frequency trader trading against someone with nanosecond-transmission-permitting wires).
If that is so, then we’ve always been part of a communication network, and the internet is just a newer instantiation of what previously would have been conducted by mail, horse, steam engine, or even foot. (Note an irony here: a central part of Smith’s book is that the communication network is multiply realized: now in the internet, now in the telegraph, now in animals’ signalling; it’s slightly odd that he’d be so pro multiple realizability in this domain and not in another.) As he sums up:
The ecology of the internet, on this line of thinking, is only one more recent layer of the ecology of the planet as a whole, which overlays network upon network…prairie dogs calling out to their kin the exact shape and motions of an arriving predator; sagebrushes emitting airborne methyl jasmonate to warn others; blue whales singing songs for their own inscrutable reasons, perhaps simply for the joy of free and directionless discourse of the sort that human beings — now sometimes aided by screens and cables and signals in the ether — call by the name of chatting
But there is, as I see it, another idea, slightly in tension with this universal-historical one. It comes out mainly in the very interesting penultimate chapter about the similarities between the development of proto-computing machines and mechanized looms. I can’t present the whole story, but in that chapter, and throughout, Smith is much taken with how natural the idea of webs or weavings or tapestries or strings (think of string theory) is as a way of thinking about the planet and its life. We speak of the ‘social fabric’, of ‘ties that bind’, and so on, and Smith thinks this important.
This can be good or bad. He suggests, in a passage I can’t immediately find, that the contemporary vogue for computational theories of intelligence is a reflection of the fact that it’s computers that (in theory) make the big bucks and are economically and culturally pre-eminent (damn, lost my New Yorker chances), just as previously we thought of humans or societies as machines. We use whatever concepts are around to get a grip on mentality, but those modes of thinking will become as unfashionable as Hobbes’s kind of wind-up-toy version of how humans and their societies work (incidentally, in my other, fictional, book, Coming From Nothing, available instantly and freely in a few clicks at all good internet libraries, the protagonist makes the same argument).
In other places, though, metaphors are less deceptive. We could say metaphor is to Smith as the categories are to Kant: necessary features of cognition that go to shape the cognized. In what I take to be a fundamental passage towards the end of the book, he writes:
As Paul Ricoeur has argued, metaphor arises “from the very structures of the mind,” and in this respect is at least as worthy of being taken seriously by philosophers as any literal proposition. This is particularly so when the human mind keeps returning inexorably to the same metaphors…[and] sometimes the structures of the mind are powerful enough to pour out of the mind, and to impose themselves on our built reality as well. Manners of speaking become manners of world-building.
This suggests to me another way of reading Smith’s earlier history of telecommunication. It’s not that whales and sagebrushes and shitposters are all part of some structuredness, some thing multiply realized in these different domains. Rather, this structuredness, this collective way of being, is something like a fundamental part of how we see the world, and it’s because we find it so easy to carry over (the etymology, he reminds us at least once, of ‘metaphor’) talk of webs and fabric from animals to information theory that we come to think of them as one. And maybe (so this ‘other’ way isn’t quite so other) this cognitive universality is imposed on the world when people make the internet, and webs 1, 2, and 3, and whatever comes next.
I have omitted much. There is a paean to Wikipedia; a page or two on whether we see through the internet the way we see through a window or the way we see through an electron microscope that, especially with the earlier material about the algorithmic shunting we undergo on social media, could be an article of its own; fairly extensive excurses into the philosophy of Leibniz and Kant, and much more, certainly too much for this review.
I’m somewhat wary of a lot of pop philosophy, suspicious that it’s just a domain for showing off how much one has read in a way that reviewers for peer-reviewed journals don’t let you. But this book is a good remedy for that wariness, managing to be both an entertaining trip through the history of ideas and a book capable of showing you the world in a new way, which is, after all, plausibly what much of the humanities is meant to do.
(I write this proximally because a nice person at Princeton UP offered me a copy of the book. As I told her, I probably would have bought it anyway eventually (though probably not written about it). This is to say: if you have books you want to send me, please do! If I like them I’ll write about them, tell my friends, and so on.)