On future people, looking back at 21st century longtermism
(Cross-posted from Hands and Cities)
“Who knows, for all the distance, but I am as good as looking at you now, for all you cannot see me?”
– Whitman, Crossing Brooklyn Ferry
Roughly stated, longtermism is the thesis that what happens in the long-term future is profoundly important; that we in the 21st century are in a position to have a foreseeably positive and long-lasting influence on this future (for example, by lowering the risk of human extinction and other comparable catastrophes); and that doing so should be among the key moral priorities of our time.
This post explores the possibility of considering this thesis — and in particular, a certain kind of “holy sh**” reaction to its basic empirical narrative — from the perspective of future people looking back on the present day. I find a certain way of doing this a helpful intuition pump.
I. Holy sh** the future
“I announce natural persons to arise,
I announce justice triumphant,
I announce uncompromising liberty and equality,
I announce the justification of candor and the justification of pride…
O thicker and faster—(So long!)
O crowding too close upon me,
I foresee too much, it means more than I thought…”
– Whitman, So Long!
I think of many precise, sober, and action-guiding forms of longtermism — especially forms focused on existential risk in particular — as driven in substantial part by a more basic kind of “holy sh**” reaction, which I’ll characterize as follows:
1. Holy sh** there could be a lot of sentient life and other important stuff happening in the future.
2. And it could be so amazing, and shaped by people so much wiser and more capable and more aware than we are.
3. Wow. That’s so crazy. That’s so much potential.
4. Wait, so if we mess up and go extinct, or something comparable, all that potential is destroyed? The whole thing is riding on us? On this single fragile planet, with our nukes and bioweapons and Donald Trumps and ~1.5 centuries of experience with serious technology?
5. Do other choices we make influence how that entire future goes?
6. This is wild. This is extremely important. This is a crazy time to be alive.
This sort of “holy sh**” reaction responds to an underlying empirical narrative — one in which the potential size and quality of humanity’s future is (a) staggering, and (b) foreseeably at stake in our actions today.
Conservative versions of this narrative appeal to the spans of time that we might live on earth, and the number of people who might live during that time. Thus, if earth will be habitable for hundreds of millions of years, and can support some ten billion humans per century, some 10^16 humans might someday live on earth — ~a million times more than are alive today.
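For readers who want to check the arithmetic, here is a minimal sketch of the conservative estimate above, assuming the illustrative round numbers from the text (a habitable window of a few hundred million years and ten billion people per century); nothing here is a forecast:

```python
# Back-of-the-envelope check of the "conservative" estimate above.
# All inputs are the illustrative round numbers from the text, not forecasts.
habitable_years = 500_000_000        # "hundreds of millions of years" of habitability
people_per_century = 10_000_000_000  # "some ten billion humans per century"
people_alive_today = 8_000_000_000   # rough current world population

future_people = (habitable_years / 100) * people_per_century
print(f"{future_people:.0e} future lives")  # ~5e+16, i.e. on the order of 10^16
print(f"~{future_people / people_alive_today:.0e}x the present population")  # ~10^6, i.e. ~a million times
```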
I’m especially interested here, though, in a less conservative version: in which our descendants eventually take to the stars, and spread out across our own galaxy, and perhaps across billions of other galaxies — with billions or even trillions of years to do, build, create, and discover what they see as worth doing, building, creating, and discovering (see Ord (2020), Chapter 8, for discussion).
Sometimes, a lower bound on the value at stake in this sort of possibility is articulated in terms of human lives (see e.g. Bostrom (2003)). And as I wrote about last week, I think that other things equal, creating wonderful human lives is a deeply worthwhile thing to do. But I also think that talking about the value of the future in terms of such lives should just be seen as a gesture — an attempt to point, using notions of value we’re at least somewhat familiar with, at the possibility of something profoundly good occurring on cosmic scales, but which we are currently in an extremely poor position to understand or anticipate (see the section on “sublime Utopias” here).
Indeed, I think that breezy talk about what future people might do, especially amongst utilitarian-types, often invokes (whether intentionally or no) a vision of a future that is somehow uniform, cold, metallic, voracious, regimented — a vision, for all its posited “goodness” and “optimality” and “efficiency,” that many feel intuitively repelled by (cf. the idea of “tiling” the universe with something, or of something-tronium — computronium, hedonium, etc.).
This need not be the vision. Anticipating what future people will actually do is unrealistic, but I think it’s worth remembering that for any particular cosmic future you don’t like, future people can just make a better one. That is, the question isn’t whether some paper-thin, present-day idea of the cosmic future is personally appealing; or whether one goes in, more generally, for the kind of sci-fi aesthetic associated with thinking about space travel, brain emulations, and so forth. The question is whether future people, much wiser than ourselves, would be able to do something profoundly good on cosmic scales, if given the chance. I think they would. Extrapolating from the best that our current world has to offer provides the merest glimpse of what’s ultimately possible. For me, though, it’s more than enough.
But if we consider futures on cosmic scales — and we assume that the universe is not inhabited, at the relevant scales, by other intelligent life (see here for some discussion) — then the numbers at stake quickly get wildly, ridiculously, crazily large. Using lives as a flawed lower-bound metric, for example, Bostrom (2003) estimates that if the Virgo Supercluster contains, say, ten thousand billion stars, and each star can support at least ten billion biological humans, the Virgo Supercluster could support more than 10^23 humans at any one time. If roughly this sort of population could be sustained for, say, a hundred billion years, then at ~100 years per life, this would be some 10^32 human lives. And if we imagine forms of digital sentience instead of biological life, the numbers balloon even more ludicrously: Bostrom (2014, Chapter 6), for example, estimates 10^58 life-equivalents for the accessible universe as a whole. That is, 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
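Here, similarly, is a quick sketch reproducing the orders of magnitude in Bostrom’s illustration as quoted above, using only the round numbers from the text (the 10^58 figure for digital sentience comes from a separate estimate and isn’t derived here):

```python
# Reproducing the orders of magnitude in Bostrom's (2003) illustration above.
# All inputs are the round numbers quoted in the text.
stars_in_virgo_supercluster = 1e13   # "ten thousand billion stars"
humans_per_star = 1e10               # "at least ten billion biological humans" per star
years_sustained = 1e11               # "a hundred billion years"
years_per_life = 100                 # "~100 years per life"

population_at_a_time = stars_in_virgo_supercluster * humans_per_star
total_lives = population_at_a_time * (years_sustained / years_per_life)
print(f"{population_at_a_time:.0e} humans at any one time")  # 1e+23
print(f"{total_lives:.0e} human lives over time")            # 1e+32
```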
Once we start talking about numbers like this, lots of people bounce off entirely — not just because the numbers are difficult to really grok, or because of the aesthetic reactions just discussed, but because the numbers are so alien and overwhelming that one suspects that any quantitative (and indeed, qualitative) ethical reasoning that takes them as inputs will end up distorted, or totalizing, or inhuman.
I think hesitations of this kind are very reasonable. And importantly, the case for working to improve the long-term future, or to reduce existential risk, need not depend on appeals to astronomical numbers. Indeed, as Ord (2020) discusses, existential risk seems like an important issue from a variety of perspectives. Nor need we countenance any sort of totalizing or inhuman response to the possibility of a future on cosmic scales.
But I also don’t think we should ignore or dismiss this possibility, just because the numbers in question are so unthinkably large. To the contrary: I think that the possibility of a future on cosmic scales is a very big deal.
Of course, the possibly immense value at stake in the long-term future is not, in itself, enough to get various practically relevant forms of longtermism off the ground. Such a future also needs to be adequately large in expectation (e.g., once one accounts for ongoing risk of events like extinction), and it needs to be possible for us to have a foreseeably positive and sufficiently long-lasting influence on it. There are lots of open questions about this, which I won’t attempt to address here.
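As one toy way of seeing how “large in expectation” interacts with ongoing risk, consider a deliberately simplified model of my own (not something from the post or from Ord): if each century carries a constant probability r of a future-ending catastrophe, survival is geometric and the expected number of future centuries is roughly 1/r, so sustained reductions in ongoing risk matter enormously to the expected size of the future:

```python
# Toy illustration (my own simplification, not from the text): with a constant
# per-century probability r of permanent catastrophe, survival is geometric,
# so the expected number of future centuries is about 1/r.
def expected_centuries(r: float) -> float:
    """Expected number of centuries survived, given constant per-century risk r."""
    return 1.0 / r

for r in (0.1, 0.01, 0.001):
    print(f"per-century risk {r:>5}: ~{expected_centuries(r):,.0f} expected centuries")
# Lowering ongoing risk by 10x raises the expected length of the future by ~10x,
# which is part of why "large in expectation" depends on more than raw potential size.
```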
Rather, following Ord (2020), I’m mostly going to focus on an empirical picture in which the future is very large and positive in expectation, and in which we live during a period of danger to it unprecedented in the ~200,000-year history of our species — a period in which we are starting to develop technologies powerful enough to destroy all our potential, but where we have not yet reached the level of maturity necessary to handle such technologies responsibly (Ord calls this “The Precipice”). And I’ll assume, following Ord, that intentional action on the part of present-day humans can make a meaningful difference to the level of risk at stake.
Granting myself this more detailed empirical picture is granting a lot. Perhaps some will say: “well, obviously if I thought that humanity’s future was immense and amazingly good in expectation, and that there’s a substantive chance it gets permanently destroyed this century, and that we can lower the risk of this in foreseeable and non-trivial ways, I would be on board with longtermism. It’s just that I’m skeptical of those empirical premises for XYZ reason.” And indeed, even if we aren’t actively skeptical for particular, easily-articulable reasons, our intuitive hesitations might encode various forms of empirical uncertainty regardless. (See, e.g., epistemic learned helplessness for an example of where heuristics like this might come from. Basically, the idea is: “arguments, they can convince you of any old thing, just don’t go in for them roughly ever.”)
Skepticism of that kind isn’t the type I’m aiming to respond to here. Rather, the audience I have in mind is someone who looks at this empirical picture, believes it, and says: “meh.” My view is that we should not say “meh.” My view is that if such an empirical picture is even roughly right, some sort of “holy sh**” reaction, in the vein of 1-6 above, is appropriate, and important to remain in contact with — even as one moves cautiously in learning more, and thinking about how to respond practically.
What’s more, I think that imagining this empirical picture from the perspective of the future people in question can help make this sort of reaction intuitively accessible.
II. Holy sh** the past
“I have sung the body and the soul, war and peace have I sung, and the songs of life and death,
And the songs of birth, and shown that there are many births.”
– Whitman, So Long!
To get at this, let’s imagine that humans and their descendants do, in fact, go on to spread throughout the stars, and to do profoundly good things on cosmic scales, lasting hundreds of billions of years. Let’s say, for concreteness, that these good things look something like “building complex civilizations filled with wonderful forms of conscious life” — though this sort of image may well mislead.
And let’s imagine, too, that looking back, our descendants can see that there were in fact serious existential risks back in the 21st century — risks that irresponsible humans could exacerbate, and responsible humans foreseeably reduce; and that had humanity succumbed to such a risk, no other species, from earth or elsewhere, would ever have built a future of remotely comparable scale or value. What would these descendants think of the 21st century?
When I imagine this, I imagine them having a “holy sh**” reaction akin to the one I think of 21st-century longtermists as having. That is, I imagine them looking backwards through the aeons, and seeing the immensity of life and value and consciousness throughout the cosmos rewind and un-bloom, shrinking, across breathtaking spans of space and time, to an almost infinitesimal point — a single planet, a fleck of dust, where it all started. What Yudkowsky (2015) calls “ancient earth.”
Sometimes I imagine this as akin to playing backwards the time-lapse growth of an enormous tree, twisting and branching through time and space on cosmic scales — a tree whose leaves fill the firmament with something lush and vast and shining; a tree billions of years old, yet strong and intensely alive; a tree which grew, entirely, from one tiny, fragile seed.
And I imagine them zooming in on that seed, back to the very early history of the species that brought the cosmos to life, to the period just after their industrial revolution, when their science and technology really started to take off. A time of deep ignorance and folly and suffering, and a time, as well, of extreme danger to the entire future; but also a time in which life began to improve dramatically, and people began to see more clearly what was possible.
What would they think? Here I think of Carl Sagan’s words:
“They will marvel at how vulnerable the repository of all our potential once was, how perilous our infancy, how humble our beginnings, how many rivers we had to cross, before we found our way.”
Or, more informally, I imagine them going: “Whoa. Basically all of history, the whole thing, all of everything, almost didn’t happen.” I imagine them thinking about everything they see around them, and everything they know to have happened, across billions of years and galaxies — things somewhat akin, perhaps, to discoveries, adventures, love affairs, friendships, communities, dances, bonfires, ecstasies, epiphanies, beginnings, renewals. They think about the weight of things akin, perhaps, to history books, memorials, funerals, songs. They think of everything they love, and know; everything they and their ancestors have felt and seen and been a part of; everything they hope for from the rest of the future, until the stars burn out, until the story truly ends.
All of it started there, on earth. All of it was at stake in the mess and immaturity and pain and myopia of the 21st century. That tiny set of some ten billion humans held the whole thing in their hands. And they barely noticed.
III. Shared reality
“What is it then between us?
What is the count of the scores or hundreds of years between us?”
– Whitman, Crossing Brooklyn Ferry
There is a certain type of feeling one can get from engaging with someone from the past, who is writing about — or indeed, writing to — people in the future like yourself, in a manner that reflects a basic picture of things that you, too, share. I’ll call this feeling “shared reality” (apparently there is some sort of psychological literature that uses this term, and it’s used in practices like Circling as well, but I don’t necessarily have the meaning it has in those contexts in mind here).
I get this feeling a bit, for example, when I read this quote from Seneca, writing almost 2,000 years ago (quote from Ord (2020), Chapter 2):
“The time will come when diligent research over long periods will bring to light things which now lie hidden. A single lifetime, even though entirely devoted to the sky, would not be enough for the investigation of so vast a subject… And so this knowledge will be unfolded only through long successive ages.”
Reading this, I feel a bit like saying to Seneca: “Yep. You got the basic picture right.” That is, it seems to me like Seneca had his eye on the ball — at least in this case. He knew how much he didn’t know. He knew how much lay ahead.
I feel something similar, though less epistemic, and more interpersonal, with Whitman, who writes constantly about, and to, future people (thanks to Caroline Carlsmith for discussion and poem suggestions, and for inspiring this example; see also her work in response to Whitman, here). See, e.g., here:
“Full of life, sweet-blooded, compact, visible,
I forty years old the Eighty-third Year of The States,
To one a century hence, or any number of centuries hence,
To you, yet unborn, these, seeking you.
When you read these, I, that was visible, am become invisible;
Now it is you, compact, viable, realizing my poems, seeking me,
Fancying how happy you were, if I could be with you, and become your lover;
Be it as if I were with you. Be not too certain but I am now with you.”
And here:
“Others will enter the gates of the ferry and cross from shore to shore,
Others will watch the run of the flood-tide,
Others will see the shipping of Manhattan north and west, and the heights of Brooklyn to the south and east,
Others will see the islands large and small;
Fifty years hence, others will see them as they cross, the sun half an hour high,
A hundred years hence, or ever so many hundred years hence, others will see them,
Will enjoy the sunset, the pouring-in of the flood-tide, the falling-back to the sea of the ebb-tide…
It avails not, time nor place—distance avails not,
I am with you, you men and women of a generation, or ever so many generations hence,
Just as you feel when you look on the river and sky, so I felt,
Just as any of you is one of a living crowd, I was one of a crowd,
Just as you are refresh’d by the gladness of the river and the bright flow, I was refresh’d,
Just as you stand and lean on the rail, yet hurry with the swift current, I stood yet was hurried…
What thought you have of me now, I had as much of you—I laid in my stores in advance,
I consider’d long and seriously of you before you were born.”
That is, it feels like Whitman is living, and writing, with future people — including, in some sense, myself — very directly in mind. He’s saying to his readers: I was alive. You too are alive. We are alive together, with mere time as the distance. I am speaking to you. You are listening to me. I am looking at you. You are looking at me.
If the basic longtermist empirical narrative sketched above is correct, and our descendants go on to do profoundly good things on cosmic scales, I have some hope they might feel something like this sense of “shared reality” with longtermists in the centuries following the industrial revolution — as well as with many others, in different ways, throughout human history, who looked to the entire future, and thought of what might be possible.
In particular, I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.
I imagine our descendants saying: “Yes. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.” I imagine them looking back through time at their distant ancestors, and seeing some of those ancestors, looking forward through time, at them. I imagine eyes meeting.
IV. Narratives and mistakes
“It appears to me I am dying.
Hasten throat and sound your last,
Salute me—salute the days once more. Peal the old cry once more.”
– Whitman, So Long!
To be clear: this is some mix between thought experiment and fantasy. It’s not a forecast, or an argument. In particular, the empirical picture I assumed above may just be wrong in various key ways. And even if it isn’t, future people need not think in our terms, or share our narratives. What’s salient to them may be entirely different from what’s salient to us. And regardless of the sympathy they feel towards post-industrial revolution longtermists, they will be in a position to see, too, our follies and mistakes, our biases and failures; what, in all of it, was just a game, something social, fanciful, self-serving — but never, really, real.
Indeed, even if longtermists are right about the big picture, and act reasonably in expectation, much if not all of what we try to do in service of the future will be wasted effort — attempts, for example, to avert catastrophes that were never going to happen, via plans that were never going to succeed. Future people would see this, too. And they would see the costs. They’d see what was left undone — what mistakes, and waste, and bad luck meant. And they would see everything else that mattered about our time, too.
Indeed, in bad cases, they might see our grand hopes for the future as naive, sad, silly, tragic — a product of a time before it all went so much more deeply wrong, when hope was still possible. Or they won’t exist to see us at all.
I’m mostly offering this image of future people looking back as a way of restating the “holy sh**” reaction I described in section I, through a certain lens. I’m not sure if it will land for anyone who didn’t have that reaction in the first place. But I find that it makes a difference for me.