Assuming rational agents with a reasonable level of altruism (by which I mean, incorporating the needs of other people and future generations into their own utility functions, to roughly the degree we expect of “decent people” today)...
If such a person figures that getting rid of the Nazis, or the Daleks, or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altruistic (I won’t say “unselfish” for reasons discussed elsewhere in this thread) and far-sighted (not discounting future generations too much)...
… then they’re right, aren’t they?
If the guys behind the Trinity test weighed the negative utility of the Axis taking over the world, presumably with the end result of boots stamping on human faces forever, and determined that the 3⁄1,000,000 chance of ending all human life was a price worth paying to prevent this future from coming to pass, then couldn’t Queen Victoria perform the same calculations, and conclude “Good heavens. Nazis, you say? Spreading their horrible fascism in my empire? Never! I do hope those plucky Americans manage to build their bomb in time. Tiny chance of destroying the world? Better they take that risk than let fascism rule the world, I say!”
If the utility calculations performed regarding the Trinity test were rational, altruistic and reasonably far-sighted, then they would have been equally valid if performed at any other time in history. If we apply a future discounting factor of e^-kt, then that factor would apply equally to all elements in the utility calculation, rescaling every term by the same amount without changing the sign of the total. If the net utility of the test were positive in 1945, then it should have been positive at all points in history before then. If President Truman (rationally, altruistically, far-sightedly) approved of the test, then so should Queen Victoria, Julius Caesar and Hammurabi have, given sufficient information. Either the utility calculations for the test were right, or they weren’t.
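To spell that out: shifting the vantage point back by T years just multiplies every term in the discounted sum by the same factor e^-kT, which can’t flip the sign of the total. A minimal sketch in Python, with made-up utility numbers purely for illustration (nothing here is a real estimate of the Trinity decision):

```python
import math

def discounted_net_utility(events, k, now):
    """Sum of utilities u occurring at times t, each weighted by e^(-k * (t - now))."""
    return sum(u * math.exp(-k * (t - now)) for t, u in events)

# Hypothetical numbers: a large expected gain from averting the bad future,
# and an expected loss of (tiny probability) * (huge catastrophe).
events = [(1945.0, +1000.0), (1945.0, -3e-6 * 1e8)]

k = 0.02  # arbitrary discount rate, per year
u_from_1945 = discounted_net_utility(events, k, now=1945.0)
u_from_1850 = discounted_net_utility(events, k, now=1850.0)  # Queen Victoria's vantage point

# The 1850 figure is just the 1945 figure scaled by e^(-k * 95): same sign, same verdict.
print(u_from_1945, u_from_1850)
print(math.isclose(u_from_1850, u_from_1945 * math.exp(-k * 95.0)))
```

If costs and benefits fell at different future times, a different k could of course change the verdict; the point here is only that moving the single vantage point doesn’t.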
If they were right, then the problem stops being “Oh no, future generations are going to destroy the world even if they’re sensible and altruistic!”, and starts being “Oh no, a horrible regime might take over the world! Let’s hope someone creates a superweapon to stop them, and damn the risk!”
If they were wrong, then the assumption that the ones performing the calculation were rational, altruistic and far-sighted is wrong. Taking these one by one:
1) The world might be destroyed by someone making an irrational decision. No surprises there. All we can do is strive to raise the general level of rationality in the world, at least among people with the power to destroy the world.
2) The world might be destroyed by someone with only his own interests at heart. So basically we might get stuck with Dr Evil. We can’t do a lot about that either.
3) The world might be destroyed by someone acting rationally and altruistically for his own generation, but who discounts future generations too much (i.e. his value of k in the discounting factor is much larger than ours). This seems to be the crux of the problem. What is the “proper” value of k? It should probably depend on how much longer humans are going to be around, for reasons unrelated to the question at hand. If the world really is going to end in 2012, then every dollar spent on preventing global warming should have been spent on alleviating short-term suffering all over the world, and the proper value for k is very large. If we really are going to be here for millions of years, then we should be exceptionally careful with every resource (both material and negentropy-based) we consume, and k should be very small. Without this knowledge, of course, it’s very difficult to determine what k should be (see the sketch after this list).
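To make the dependence on k a bit more concrete, here is a rough sketch (my own illustration, with arbitrary numbers) of how much of the total discounted weight over a horizon of T years falls within the next decade, for different choices of k:

```python
import math

def near_term_share(k, horizon_years, window_years=10.0):
    """Fraction of the total discounted weight, the integral of e^(-k*t) from 0 to the
    horizon, that falls within the first `window_years`."""
    total = (1.0 - math.exp(-k * horizon_years)) / k
    near = (1.0 - math.exp(-k * window_years)) / k
    return near / total

# Heavy discounting (large k): the next decade is nearly everything, whatever the horizon.
print(near_term_share(k=0.5, horizon_years=1_000_000))    # ~0.993
# Light discounting (tiny k) over a million-year horizon: the next decade barely registers.
print(near_term_share(k=1e-6, horizon_years=1_000_000))   # ~1.6e-5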
That may be the way to avoid a well-meaning scientist wiping out all human life—find out how much longer we have as a species, and then campaign for everyone to live their lives accordingly. Then, the only existential risks that would be taken are the ones that are actually, seriously, truly, incontrovertibly, provably worth it.
You’ve sidestepped my argument, which is that just the existential risks that are worth it are enough to guarantee destroying the universe in a cosmologically short time.
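To spell out the compounding (the per-gamble number below is just the Trinity figure reused for illustration): if each “worth it” gamble carries an independent chance p of destroying everything, the survival probability after n such gambles is (1-p)^n, which heads to zero however small p is:

```python
def survival_probability(p, n):
    """Chance of surviving n independent gambles, each with probability p of catastrophe."""
    return (1.0 - p) ** n

p = 3e-6  # a Trinity-sized risk per "worth it" decision (illustrative)
print(survival_probability(p, 1))           # ~0.999997
print(survival_probability(p, 100_000))     # ~0.74
print(survival_probability(p, 10_000_000))  # ~9e-14
```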