We need a name for the “effective altruists” or “extreme altruists” who specifically care about the cosmic future that, allegedly, might depend on events on Earth, or even just for the field of study that concerns itself with how to act in such a situation. “Astronomical altruism” and “astronomical ethics” suggest themselves… And I would be more impressed with such astronomical altruists, and their earthbound cousins the effective altruists, if they showed more awareness of the network of catastrophe and disappointment that has been so much a part of human life to date.
The astronomical altruists are a minority within a minority, and I suppose I see two camps here. One group thinks in terms of FAI and the contingency of value systems, so the possible futures are conceived as: extinction, a civilization with human-friendly values replicated across the galaxies, a civilization with warped values replicated across the galaxies, paperclips… So there is the idea that a cosmically bad outcome is possible, not just because of the “astronomical waste” of a future universe that could have been filled with happy people but instead ends up uninhabited, but because the blueprint of the cosmic civilization was flawed at its inception. The flawed blueprint might produce something that is simply alien to human sensibilities, even “renormalized” ones, because it forgot some essential imperatives or introduced others; or, the worst nightmare, it might produce something that looks actually evil and hostile to human values, and replicate that across millions of light-years.
I was going to say that the other camp just hopes for an idyllic human life copied unto infinity, and concerns itself neither with contingency of value, nor with the possibility that a trillionfold duplication of Earth humanity will lead to a trillionfold magnification of the tragedies already known from our history. Some extreme advocates of space colonization might fit this description, but of course there are other visions out there—a crowded galaxy of upload-descended AIs, selected for their enthusiasm for replication, happily living in subsistence conditions (i.e. with very tight resource budgets); or poetic rhapsodies about an incomprehensibly diverse world of robot forms and AIs of astronomical size, remaking the cosmos into one big Internet…
So perhaps it’s more accurate to say that there are two subtypes of astronomical altruism that are a little unreflective about the great future that could happen, the great future for the sake of which we must fight those various threats of extinction grouped under “existential risk”. There is a humanist vision, which supposes that the great future consists of human happiness replicated across the stars. It imagines an idyll that has never existed on Earth, but which has certainly been imagined many times over by human utopians seeking a way beyond the grim, dour world of history; the novelty is that this idyll is then imagined as instantiated repeatedly across cosmic spaces. And there is a transhumanist vision, basically science-fictional, of inconceivable splendors, endless strange worlds and strange modes of being, the product of an imagination stirred by the recent centuries of intellectual and technological progress.
Now here is something curious. If we keep looking for other views that have been expressed, we will occasionally run across people who are aware that cosmically extended civilization means the possibility or even likelihood of cosmically extended tragedy and catastrophe. And some of these people will say that, nonetheless, it is still worth affirming the drive to spread across the universe: this prospect is so grand that it would redeem even astronomically sized tragedy. I cannot think of any prominent public “tragic cosmists” in the present who have the fanaticism of the astronomical altruists but whose sensibility is tragic affirmation of life, but I’m sure such views are held privately by a few people.
In any case, you would think that utilitarians concerned about “astronomical waste” would also be concerned about the possibility of “astronomical tragedy”. And perhaps they could perform their utilitarian calculation, which normally turns up the result that the good outweighs the bad and therefore we should go for it. But this whole aspect seems very underplayed in discussions of existential risk, for example. There might be a good future, or a bad future, or no future; but who ever talks of a good future riddled with bad, or a bad future with islands of good?
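For concreteness, here is a toy version of that calculation; every probability and payoff is invented, purely to show how the mixed futures enter the arithmetic:

```python
# A toy expected-value calculation. Every number here is hypothetical,
# chosen only to illustrate how "mixed" futures enter the sum.
outcomes = {
    "good future":                     (0.20,  1e12),
    "good future riddled with bad":    (0.30,  4e11),
    "bad future with islands of good": (0.20, -4e11),
    "bad future":                      (0.10, -1e12),
    "no future":                       (0.20,  0.0),
}

expected_value = sum(p * v for p, v in outcomes.values())
print(f"expected value: {expected_value:.2e}")  # positive with these numbers
```

With these made-up numbers the good still outweighs the bad, but the two mixed outcomes carry half the probability mass, which is exactly the territory that goes undiscussed.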
There seems to be a mindset according to which actions here and now (I mean 21st-century Earth) set the tone for everything that follows. We need to make the effort to produce a good future, but once it is achieved and set in motion, we can relax, and we or our descendants will just reap the rewards. Perhaps the FAI camp has some justification for thinking like this, since they envision the rise of a hyperintelligence of overwhelming power, with the capacity to make its preferences law within its expanding sphere of influence…
But otherwise, this idea that this is the Now that matters the most reflects a sort of optimism of the will, an optimism about one’s place in the scheme of things and one’s capacity to make a difference in a big way. Some advocates of space colonization say that it’s about not having all our eggs in one basket; so there might be some justification there for thinking this is a special moment: this is indeed the time when it has first become possible for humans to live beyond Earth. If you’re worried about whole-Earth vulnerabilities, then this is our first chance to simply place people beyond their reach. Take that, earthbound existential risks!
From this perspective, what I don’t see discussed is (1) the fact that hazard persists even beyond Earth, and (2) the fact that saving the human race from destruction also means perpetuating its suffering, evil, and folly. Of course it’s very difficult to get your mind around the full spectrum of possibilities, when they include wars between ideologies that don’t even exist yet, or the catastrophic decline and fall of vast projects not yet imagined. But I think that much of the ethical anxiety about the imperative to keep the possibility of a big future alive has not assimilated the lessons of the earthbound present; that it’s based in either a desire to protect the happiness of oneself and one’s friends from a threatening world, or an affirmation of will and power which hasn’t accepted the lesson of life and history: that things do fall apart or get torn apart, that life also includes frustration, desolation, and boredom.
I do not know whether any form of cosmic hope is warranted, but I especially doubt that cosmic hope pursued in a state of blindness or denial will nonetheless be fulfilled.
I had a conversation about argument mapping software with Katja Grace and Paul Christiano at the weekend, and this comment reinforces my conclusion that really good argument mapping software would be a very high-value thing to have. I want to map out the tree of arguments underlying Beckstead’s thesis, so that I can ask you to identify a particular node you disagree with, and set out a counterargument that he hasn’t already presented. It would be a lot easier to tell whether there is value in what you say that way.
However, in the absence of that, a paragraph saying “On page X he asserts Y, but Z” would help a lot.
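To make the idea concrete, here is a minimal sketch of the structure such software might expose; the class and method names are hypothetical, not a description of any existing tool:

```python
# A minimal sketch of an argument tree, with hypothetical names.
# The point: disagreement attaches to one node, not to the whole thesis.
from dataclasses import dataclass, field

@dataclass
class ArgumentNode:
    claim: str
    supports: list["ArgumentNode"] = field(default_factory=list)    # premises / sub-arguments
    objections: list["ArgumentNode"] = field(default_factory=list)  # counterarguments filed here

    def object_to(self, counterargument: str) -> "ArgumentNode":
        """File a counterargument against exactly this node."""
        node = ArgumentNode(counterargument)
        self.objections.append(node)
        return node

# Usage: a critic pinpoints the premise they reject.
thesis = ArgumentNode("What matters most is the far future")
premise = ArgumentNode("A good outcome, once set in motion, stays good")
thesis.supports.append(premise)
premise.object_to("Hazard persists beyond Earth, so nothing stays set")
```

The design point is that an objection attaches to a single node, so a critic must say which premise fails rather than disputing the thesis wholesale.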
Personally, it’s not actually about saving the future. It’s about justifying the past. (Not necessarily endorsing this as an ethical thesis. Reporting it as an attempted stack trace on my actual emotional reasons for supporting the good-future project.)
There might be a good future, or a bad future, or no future; but who ever talks of a good future riddled with bad, or a bad future with islands of good?
Given an FAI singleton or uFAI singleton, islands are improbable. A Malthusian future full of ems, however, seems like a possible fit to your model. So expectations about how intelligence and power will coalesce or diversify are crucial.
But otherwise, this idea that this is the Now that matters the most reflects a sort of optimism of the will
I think it just reflects a straight-line prediction. Every previous “Now” (each one, in its own time) was crucial; why wouldn’t this one be? I’m assuming that history is pretty chaotic: small disturbances in the past would lead to vast ones in the present.
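As a toy illustration of that assumption (the logistic map standing in for history, which of course it is not), two trajectories that start a billionth apart become completely different within a few dozen steps:

```python
# Sensitive dependence in the logistic map (a toy, not a model of history).
r = 3.9                  # parameter in the chaotic regime
x, y = 0.5, 0.5 + 1e-9   # two "histories", one nudged by a billionth

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.2e}")
# The gap grows from 1e-9 to order 1 within a few dozen steps: the sense
# in which a small disturbance "then" becomes a vast difference "now".
```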
How about we call such people “absurd altruists”, since abstracting one’s present interests to such cosmic scales is surely absurd? All these folks are doing is trying to construct some kind of scientific religion, to give themselves “cosmic hope” where none is warranted and to put themselves in the position of universal saviors. I used to do the same thing myself, until I deconstructed it a bit and realized that even superintelligences and intergalactic civilizations change nothing fundamentally. This is why I now advocate a form of nihilism, or what I like to call “Zen-Cosmicism”, which is a spiritual acceptance of the absurdity and material futility of our existence, without drawing any debilitating moral conclusions therefrom. The universe is what it is, but it’s not my problem and it can’t be fixed, so I’m not going to get neurotic or fanatical about fixing it.
This is why I now advocate a form of nihilism, or what I like to call “Zen-Cosmicism”, which is a spiritual acceptance of the absurdity and material futility of our existence, without drawing any debilitating moral conclusions therefrom. The universe is what it is, but it’s not my problem and it can’t be fixed, so I’m not going to get neurotic or fanatical about fixing it.
The second sentence is a debilitating moral conclusion drawn from the first.
Does Beckstead’s thesis say something beyond, “If life is good, and if we have a chance to create lots of life, then we should go for it?”