The reason to not push the button, perhaps: unforeseeable (?) unintended consequences.
I expect point number 1 would weigh heavily in anyone’s mind when making the choice, but it might turn out to be a harmfully biased option, assuming it even works. As to point two: in the absence of disease and aging, the population would hit its limits along some other front. Starvation is only the obvious end of the line; the catch is what we might expect to see on the way there: rising global tensions, civil unrest, wars (gloves off or otherwise), and accelerated environmental decay, all things that may not seem like such pressing problems now. We could with perfect seriousness ask whether the current state of affairs isn’t safer for humanity at large than the state after pressing the button. (I’ll confess: I would use people’s answers to the original question mostly as a proxy measurement of their general optimism.)
Frankly, I’d argue the exact reverse of point 3. IMO, it takes heavy speculation to avoid any of the risks I mentioned, and it’s speculating on the positive effects that seems questionable. The only immediate species-wide benefit would be that world-class expertise in every field of science would stop draining out of the world at a steady pace. Anything like “people will care more about the future” supposes fairly fundamental changes in how people think and behave. I expect birth control regulations would be passed, but would you expect to see them work? How would you expect to see them enforced? My guess is: not in worldwide peace and mutual harmony.
Are you also considering the unforeseen unintended consequences of not pushing the button and concluding that they are preferable? (If so, can you clarify on what basis?)
Without that, it seems to me that uncertainty about the future is just as much a reason to push as to not-push, and therefore neither decision can be justified based on such uncertainty.
Yes. It’s not that the world-as-is is a paradise and we shouldn’t do anything to change it, but pushing the button seems like it would rock the boat far more than not pushing it. Where by “rock the boat” I mean “significantly increase the risk of toppling civilization (and possibly the species, and possibly the planet) in exchange for the short-term warm fuzzies of having fewer people die in the first few years following our decision.”
Without that, it seems to me that uncertainty about the future is just as much a reason to push as to not-push, and therefore neither decision can be justified based on such uncertainty.
Uncertainty being just as much a reason to push as to not-push seems like another way of saying we might as well flip a coin, which doesn’t seem right. Now, I’m not claiming to run some oracle-like future-extrapolation algorithm that lets me say “catastrophe X will break out in place Y at time Z” with confidence. But assuming that making the best possible choice takes priority over avoiding personal liability, the stakes in this question are high enough that we should base the choice on something. Something more than a coin flip.
If uncertainty is just as much a reason to push as to not-push, that doesn’t preclude having other reasons, better than a coin flip, to choose one over the other. The question becomes: what ought those reasons to be?
That said, if you believe that pushing the button creates greater risk of toppling civilization than not-pushing it, great, that’s an excellent reason to not-push the button. But what you have described is not uncertainty, it is confidence in a proposition for as-yet-undisclosed reasons.
I’m starting to feel I don’t know what’s meant by “uncertainty” here. To me, it is not a reason in and of itself either way, to push the button or not. And since it isn’t a reason to do one thing or another, I find myself confused by the idea of looking for “reasons other than uncertainty”. (Or did I misunderstand that part of your post?) For me it’s just a thing I have to reason in the presence of, a fault line to be aware of and to minimize to the best of my ability when making predictions.
For the other point, here’s some direct disclosure about why I think what I think:
There’s plenty of historical precedent for conflict over resources, and a biological immortality pill/button would do nothing to fix the underlying causes of that phenomenon. One notable source of trouble would be the non-negligible desire people have to produce offspring. So, assuming no fundamental, species-wide changes in how people behave, if there were a significant drop in the global death rate, population would spike and resources would rapidly grow scarcer, leading to increased tensions, more and bloodier conflicts, accelerated environmental decay, and so on.
To avoid the previous point, the newfound immortality would need to be balanced out by some other means. Restrictions on people’s right to breed would be difficult to sell to the public and equally difficult to enforce. Again, it seems to me that expecting such restrictions to be policed successfully assumes more than expecting them to fail.
Am I misusing the Razor when I use it to back these claims?
Perhaps I confused the issue by introducing the word “uncertainty.” I’m happy to drop that word.
You started out by saying “The reason to not push the button, perhaps: unforeseeable (?) unintended consequences.” My point is that there are unforeseen unintended consequences both to pushing and to not-pushing the button, and therefore the existence of those consequences is not a reason to do either.
You are now arguing, instead, that the reason to not-push the button is that the expected consequences of pushing it are poor. You don’t actually say that they are worse than the expected consequences of not-pushing it, but if you believe that as well, then (as I said above) that’s an excellent reason to not-push the button.
It’s just a different reason than you started out citing.
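To make that shape of argument concrete: once you attach probabilities and utilities to the outcomes of each action, the choice reduces to comparing expected values, and the bare fact of uncertainty does no further work. Here is a minimal sketch in Python; every outcome, probability, and utility below is an illustrative assumption of mine, not a claim from the discussion above.

```python
# Toy expected-value comparison. The point: the decision hangs on which
# action has the better expected outcome, not on uncertainty existing.
# All numbers are made-up illustrations.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes.values())

# outcome name: (probability, utility)
push = {
    "smooth transition":       (0.3,  10),
    "resource conflicts":      (0.5, -20),
    "civilizational collapse": (0.2, -100),
}
no_push = {
    "status quo":   (0.9,   0),
    "slow decline": (0.1, -10),
}

eu_push = expected_utility(push)        # 0.3*10 + 0.5*(-20) + 0.2*(-100) = -27.0
eu_no_push = expected_utility(no_push)  # 0.9*0 + 0.1*(-10) = -1.0

print(f"E[push] = {eu_push}, E[no push] = {eu_no_push}")
print("Don't push." if eu_no_push > eu_push else "Push.")
```

With these particular numbers not-pushing wins, but the point is the form of the comparison, not the verdict; different assumptions flip it.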