Eh, I think this is really splitting hairs. I have already seen multiple people using the lack of reference to climate change to dismiss the whole thing. Not every system of values places extinction on its own special pedestal (though I think in this case “biological omnicide” might be more apt: unlike pandemics, AI could also kill the rest of non-human life). But in terms of expected loss of life AI could be even with those other things if you consider them more likely.
Not every system of values places extinction on its own special pedestal [...] in terms of expected loss of life AI could be even with those other things
Well this is wrong, and I’m not feeling any sympathy for a view that it’s not. An eternity of posthuman growth after recovering from a civilization-spanning catastrophe really is much better than lights out, for everyone, forever.
I agree that there are a lot of people who don’t see this, and will dismiss a claim that expresses this kind of thing clearly. In mainstream comments to the statement, I’ve seen frequent claims that this is about controlling the narrative and ensuring regulatory lock-in for the big players. From the worldview where AI x-risk is undoubtedly pure fiction, the statement sounds like Very Serious People expressing Concern for the Children. Whereas if object level claims were to be stated more plainly, this interpretation would crumble, and the same worldview would be forced to admit that the people signing the claim are either insane, or have a reason for saying these things that is not Controlling the Narrative. It’s the same thing as with AI NotKillEveryoneism vs. AI Safety.
Well this is wrong, and I’m not feeling any sympathy for a view that it’s not. An eternity of posthuman growth after recovering from a civilization-spanning catastrophe really is much better than lights out, for everyone, forever.
You can’t really say anything is objectively wrong when it comes to morals, but also, I generally think that evaluating the well-being of potential entities-to-be leads to completely nonsensical moral imperatives like the Repugnant Conclusion. Since no one experiences all of the utility at the same time, I think an “expected utility probability distribution” is a much more sensible metric (as in: suppose you were born as a random sentient in a given time and place, would you be willing to take the bet?).
That said, I do think extinction is worse than just a lot of death, but that’s as a function of the people who are about to witness it and know they are the last. In addition, I think omnicide is worse than human extinction alone because I think animals and the rest of life have moral worth too. But I wouldn’t blame people for simply considering extinction as 8 billion deaths, which is still A LOT of deaths anyway. It’s a small point that’s not worth arguing over. We have wide enough uncertainties on the probability of these risks anyway that we can’t really put fixed numbers to the expected harms, just vague orders of magnitude. While we may describe them as if they were numerical formulas, these evaluations really are mostly qualitative; enough uncertainty makes the numbers almost pointless. Suffice to say, if someone considers, say, a 5% chance of nuclear war a bigger worry than a 1% chance of AI catastrophe, I don’t think I can make a strong argument for them being dead wrong.
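To make the kind of comparison I have in mind concrete, here is a minimal back-of-the-envelope sketch; the probabilities and death tolls are purely illustrative assumptions, not estimates of anything:

```python
# Back-of-the-envelope expected-death comparison. All numbers are hypothetical
# placeholders; the point is orders of magnitude only.

world_population = 8e9

scenarios = {
    # name: (assumed probability, assumed deaths if it happens)
    "nuclear war":    (0.05, 1e9),               # hypothetical: 5% chance, ~1 billion deaths
    "AI catastrophe": (0.01, world_population),  # hypothetical: 1% chance, extinction-level
}

for name, (p, deaths) in scenarios.items():
    print(f"{name}: expected deaths ~ {p * deaths:.1e}")

# With these made-up numbers the two land in the same order of magnitude
# (~5e7 vs ~8e7), which is the only sense in which the comparison is meant.
```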
In mainstream comments to the statement, I’ve seen frequent claims that this is about controlling the narrative and ensuring regulatory lock-in for the big players. From the worldview where AI x-risk is undoubtedly pure fiction, the statement sounds like Very Serious People expressing Concern for the Children.
I agree this makes no sense, but it’s a completely different issue. That said, I think the biggest uncertainty re: X-risk remains whether AGI really is as close as some estimate. But this aspect is IMO irrelevant when judging whether actively trying to build AGI is a good idea. Either it’s possible, and then it’s dangerous, or it’s still way far off, and then it’s a waste of money and precious resources and ingenuity.
It’s not that complicated. There is a sense in which these claims are objective (even as the words we use to make them are 2-place words), to the same extent as factual claims: both are seen through my own mind and reified as platonic models. Though morality is an entity that wouldn’t be channeled in the physical world without people, it actually is channeled, the same as the Moon actually is occasionally visible in the sky.
as a function of the people who are about to witness it and know they are the last
My point is not about anyone’s near term subjective experience, but about what actually happens in the distant future.
But this aspect is IMO irrelevant when judging whether actively trying to build AGI is a good idea. Either it’s possible, and then it’s dangerous, or it’s still way far off, and then it’s a waste of money and precious resources and ingenuity.
It’s their resources and ingenuity. If there is no risk, it’s not our business to tell them not to waste them.
My point is not about anyone’s near term subjective experience, but about what actually happens in the distant future.
I really, really don’t care about what happens in the distant future compared to what happens now, to humans that actually exist and feel. I especially don’t care about there being an arbitrarily high number of humans. I don’t think a trillion humans is any better than a million as long as:
1. they are happy;
2. whatever trajectory led to those numbers didn’t include any violent or otherwise painful mass death event, or other torturous state.
There really is nothing objective about total sum utilitarianism; and in fact, as far as moral intuitions go, it’s not what most people follow at all. With things like “actually, death is bad” you can make a very cogent case: people, day to day, usually don’t want to die, therefore there never is a “right moment” at which death is not a violation; if there were, people could still commit suicide anyway, so death by old age or whatever else is just bad. That’s a case where you can invoke the “it’s not that complicated” argument, IMO. Total sum utilitarianism is not; I find it a fairly absurd ethical system, ripe for exploits so ridiculous and consequences so blatantly repugnant that it really isn’t very useful at all.
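To illustrate the difference between the two metrics with made-up numbers (a minimal sketch; the populations and welfare values are arbitrary assumptions, not an argument from the specific figures):

```python
# Contrast total-sum utilitarianism with the "born as a random sentient,
# would you take the bet?" metric. Populations and welfare levels are made up.

populations = {
    # name: (number of people, average welfare per person)
    "small and thriving": (1_000_000, 90),
    "huge, lives barely worth living": (1_000_000_000, 1),
}

for name, (n, avg_welfare) in populations.items():
    total_utility = n * avg_welfare   # what total-sum utilitarianism ranks by
    random_person = avg_welfare       # expected welfare of a randomly drawn person
    print(f"{name}: total = {total_utility:.1e}, random person = {random_person}")

# Total-sum prefers the huge, barely-liveable world (1e9 > 9e7): the Repugnant
# Conclusion. The random-draw metric prefers the thriving world (90 > 1).
```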
I agree, the number of humans, like a lot of other utilitarian aims, is goodharting on bad proxies. The distinction I was gesturing at is not about the amount of what happens, but about perception vs. reality. And a million humans is very different from zero anyone, even if the end was neither anticipated nor perceived.
Ok, let’s consider two scenarios:
1. humanity goes extinct gradually and voluntarily via a last generation that simply doesn’t want to reproduce and is cared for by robots to the end, so no one suffers particularly in the process;
2. humanity is locked into a future of trillions of people in inescapable torture, until the heat death of the universe.
Which is better? I would say 1 is. There are things worse than extinction (and some of them are on the table with AI too, theoretically). And anyway you should consider that, given how many “low-hanging fruit” resources we’ve used up, there are fair odds that if we’re knocked back into the millions by a pandemic or nuclear war now we may never pick ourselves back up again. Stasis is better than immediate extinction, but if you care about the long-term future it’s also bad (and implies a lot more suffering because it’s a return to the past).
As a normie, I would say 1 is. Depending on how some people see things, 2 is the past—which I disagree with and at any rate would say was the generator of an immense quantity of joy, love and courage along with the opposite qualities of pain, mourning and so on.
So for me, I would indeed say that my morality puts extinction on a higher pedestal than anything else (and I am also fully against mind uploading or leaving humans with nothing to do).
Just a perspective from a small-brained normie.
I mean, 2 is not the past in a purely numerical sense (I wouldn’t say we ever hit trillions of total humans). But the problem also lies in the “inescapable” part, which assumes e.g. permanent disempowerment. That’s not a feature of the past.
I’m not sure what your “I would say 1” meant—I asked which was better, but then you said you think extinction is its own special thing. Anyway, I don’t disagree that extinction is a special kind of bad, but it is IMO still bad in relation to people living today. I’d rather not die, but if I have to die, I’d rather die knowing the world goes on and the things I did still have purpose for someone. Extinction puts an end to that. I want to root that morality in the feelings of present people because I feel like assigning moral worth to not-yet-existing ones completely breaks any theory. For example, however many actual people exist, there’s always an infinity of potential people that don’t. In addition, it allows for justifying making existing people suffer for the sake of creating more people later (e.g. intensive factory farming of humans until we reach a population of 1 trillion ASAP, or however many is needed to justify the suffering inflicted via created utility), which is just absurd.
I would just say, as a normie, that these extensive thought experiments about factory-farmed humans mostly don’t mean much to me - though I could see a lot of justification of suffering to allow humanity to exist for, say, another 200 billion years. People have always suffered to some extent to do anything; and certainly having children entails some trade-offs, but existence itself is worth it.
But mostly the idea of a future without humanity, or even one without our biology, just strikes me with such abject horror that it can’t be countenanced.
I have children myself and I do wonder if this is a major difference. To imagine a world where they have no purpose leaves me quite aghast, and I feel this would reflect the thinking of the majority of humans.
And as such, hopefully drive policy which will, in my best futures, drive humanity forward. I see a good end as humanity spreading out into the stars and becoming inexhaustible, perhaps turning into multiple different species but ultimately, still with the struggles, suffering and triumphs of who we are.
I’ve seen arguments here and there about how the values drift from say, a hunter gatherer to us would horrify us, but I don’t see that. I see a hunter-gatherer and relate to him on a basic level. He wants food, he will compete for a mate and one day, die and his family will seek comfort from each other. My work will be different from his but I comprehend him, and as writings like Still A Pygmy show, they comprehend us.
The descriptions of things like mind uploading or accepting the extinction of humanity strike me with such wildness that it’s akin to a vast, terrifying revulsion. It’s Lovecraftian horror and, I think, very far from any moral goodness to inflict upon the majority of humanity.
My point isn’t that extinction is a-ok, but rather that you could “price it” as the total sum of all human deaths (which is the lower bound, really) and there would still be a case for that. It would still remain very much something to avoid! I think it’s worse than that, but I also don’t think it’s the worst thing possible. If the choice were between going extinct now or condemning future generations to lives of torture, I’d pick extinction as the lesser evil. And conversely, I am also very sceptical of extremely long-term reasoning, especially if used to justify present suffering. You bring up children, but those are still very much real and present. You wouldn’t want them to suffer for the sake of hypothetical 40th century humans, I assume.
Depends on the degree of suffering, to be totally honest - obviously I’m fine with them suffering to some extent, which is why we drive them to behave, etc., so they can have better futures, and sometimes enjoin them to have children so that we can continue the family line.
I think my answer actually is yes, if hypothetically their suffering allows the existence of 40th century humans, it’s pretty noble and yes, I’d be fine with it.
if hypothetically their suffering allows the existence of 40th century humans, it’s pretty noble and yes, I’d be fine with it
So, supposing everything goes all right, for every additional human born today there might be millions of descendants in the far future. Does that mean we have a moral duty to procreate as much as possible? I mean, the increased stress or financial toll surely doesn’t hold a candle to the increased future utility experienced by so many more humans!
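To spell out the kind of arithmetic being appealed to (a minimal sketch; the growth rate and time horizon are entirely made-up assumptions):

```python
# The "millions of descendants per person" arithmetic, spelled out.
# Growth rate and horizon are arbitrary assumptions chosen only to show the shape.

growth_per_generation = 1.2   # hypothetical net descendants-per-person ratio
years_per_generation = 25
years_ahead = 2000            # roughly "40th century" on the scale discussed

generations = years_ahead // years_per_generation
descendants = growth_per_generation ** generations
print(f"{generations} generations -> ~{descendants:,.0f} descendants per extra person today")

# ~2 million with these numbers; the conclusion is entirely driven by the
# assumed growth rate and horizon.
```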
To me it seems this sort of reasoning is bunk. Extinction is an extreme of course but every generation must look first and foremost after the people under its own direct care, and their values and interests. Potential future humans are for now just that, potential. They make no sense as moral subjects of any kind. I think this extends to extinction, which is only worse than the cumulative death of all humans insofar as current humans wish for there to be a future. Not because of the opportunity cost of how non-existing humans will not get to experience non-existing pleasures.
I apologize for being a normie but I can’t accept anything that involves non-existence of humanity and would indeed accept an enormous amount of suffering if those were the options.
there are fair odds that if we’re knocked back into the millions by a pandemic or nuclear war now we may never pick ourselves back up again
Humanity went from Göbekli Tepe to today in 11K years. I doubt that, even after forgetting all modern learning, it would take as much as a million years to generate knowledge and technologies for the new circumstances. I hear the biosphere can last about a billion years more. (One specific path is to use low-tech animal husbandry to produce smarter humans. This might even solve AI x-risk by making humanity saner.)
I disagree it’s that easy. It’s not a long trajectory of inevitability; like with evolution, there are constraints. Each step generally has to be aligned, on its own, with the economic incentives of the time. See how, for example, steam power was first developed to run the pumps that removed water from coal mines; the engines were so inefficient that they were only cost-effective if you didn’t also need to transport the coal. Now that we’ve used up all the surface coal and oil, not to mention screwed up the climate quite a bit for the next few millennia, conditions are different. I think technology is less a uniform progression and more a mix of “easy” and “hard” events (as in the grabby aliens paper, if you’ve read it), and by exhausting those resources we’ve made things harder. I don’t think climbing back up would be guaranteed.
(One specific path is to use low-tech animal husbandry to produce smarter humans. This might even solve AI x-risk by making humanity saner.)
Even if it were possible, this IMO would solve nothing while potentially causing an inordinate amount of suffering. And it’s also one of those super-long-term investments that don’t align with almost any short-term incentive. I say it solves nothing because intelligence wouldn’t be the bottleneck; if they had any books left lying around they’d have a road map to tech, and I really don’t think we’ve missed some obvious low-tech trick that would be relevant to them. The problem is having the materials to do those things and having immediate returns.
Intelligence is also a thing that enables perceiving returns that are not immediate, as well as the maintenance of more complicated institutions that align current incentives with long-term goals.
This isn’t a simple marshmallow-test scenario. If you have a society that has needs and limited resources, it’s not inherently “smart” to sacrifice those significantly for the sake of a long term project that might e.g. not benefit anyone who’s currently living. It’s a difference in values at that point; even if you’re smart enough, you can still not believe it to be right.
For example, suppose in 1860 everyone knew and accepted global warming as a risk. Should they, or would they, have stopped using coal and natural gas in order to save us this problem? Even if it meant lesser living standards for themselves, and possibly more death?
it’s not inherently “smart” to sacrifice those significantly for the sake of a long term project
Your argument was that this hopeless trap might happen after a catastrophe and that it’s so terrible that maybe it’s as bad as, or worse than, everyone dying quickly. If it’s so terrible, in any decision-relevant sense, then it’s also smart to plot towards projects that dig humanity out of the trap.
No, sorry, I may have conveyed that wrong and mixed up two arguments. I don’t think stasis is straight-up worse than extinction. For good or bad, people lived in the Middle Ages too. My point was more that if your guiding principle is “can we recover”, then there are more things than extinction to worry about. If you aspire to some kind of future in which humans grow exponentially, you won’t get it if we’re knocked back to preindustrial levels and can’t recover.
I don’t personally think that’s a great metric or goal to adopt, just following the logic to its endpoint. And I also expect that many smart people in the stasis wouldn’t plot with only that sort of long term benefit in mind. They’d seek relatively short term returns.
I see. Referring back to your argument was more an illustration that this motivation exists. If a society forms around that motivation, at any one time in the billion years, and selects for intelligence to enable nontrivial long-term institution design, that seems sufficient to escape stasis.