If uncertainty is just as much a reason to push as not-push, that doesn’t preclude having reasons other than uncertainty to choose one over the other which are better than a coin flip. The question becomes, what reasons ought those be?
That said, if you believe that pushing the button creates greater risk of toppling civilization than not-pushing it, great, that’s an excellent reason to not-push the button. But what you have described is not uncertainty, it is confidence in a proposition for as-yet-undisclosed reasons.
I’m starting to feel I don’t know what’s being meant by uncertainty here. It is not, to me, a reason in and of itself either way—to push the button or not. And not being a reason to do one thing or another, I find myself confused at the idea of looking for “reasons other than uncertainty”. (Or did I misunderstand that part of your post?) For me it’s just a thing I have to reason in the presence of, a fault line to be aware of and to be minimized to the best of my ability when making predictions.
For the other point, here’s some direct disclosure about why I think what I think:
There’s plenty of historical precedent for conflict over resources, and a biological immortality pill/button would do nothing to fix the underlying causes behind that phenomenon. One notable source of trouble would be the non-negligible desire people have to produce offspring. So, assuming no fundamental, species-wide changes in how people behave, if there were to be a significant drop in the global death rate, population would spike and resources would rapidly grow scarcer, leading to increased tensions, more and bloodier conflicts, accelerated erosion, etc.
To avoid the previous point, the newfound immortality would need to be balanced out by some other means. Restrictions on people’s rights to breed would be difficult to sell to the public and equally difficult to enforce. Again, it seems to me that expecting such restrictions to be policed successfully requires more assumptions than expecting them to fail.
Am I misusing the Razor when I use it to back these claims?
Perhaps I confused the issue by introducing the word “uncertainty.” I’m happy to drop that word.
You started out by saying “The reason why perhaps not push the button: unforeseeable (?) unintended consequences.” My point is that there are unforeseen unintended consequences both to pushing and not-pushing the button, and therefore the existence of those consequences is not a reason to do either.
You are now arguing, instead, that the reason to not-push the button is that the expected consequences of pushing it are poor. You don’t actually say that the expected consequences of pushing are worse than those of not-pushing, but if you believe that as well, then (as I said above) that’s an excellent reason to not-push the button.
It’s just a different reason than you started out citing.