If the probability of existential risk from AI (or grey goo, or some other exotic risk) were low enough (setting aside the creation of hell-worlds with negative utility), then you could neglect it in favor of those other risks.
Asteroids don’t lead to a scenario in which a paper-clipping AI takes over the entire light-cone and turns it into paper clips, preventing any interesting life from ever arising anywhere, so they aren’t quite comparable.
Still, your point only makes me wonder how we can justify not devoting 10% of GDP to deflecting asteroids. You say that we don’t need to put all resources into preventing unfriendly AI, because we have other things to prevent. But why do anything productive? How do you compare the utility of preventing possible annihilation to the utility of improvements in life? Why put any effort into the mundane things we devote almost all of our effort to? (Particularly if happiness tracks the derivative of quality of life rather than its absolute level: you can’t really get happier, on average, but action can lead to destruction. Happiness is problematic as a value for transhumans.)
This sounds like a straw man, but it might not be. We might just not have reached (or acclimatized ourselves to) the complexity level at which the odds of self-annihilation should begin to dominate our actions. I suspect that the probability of self-annihilation increases with complexity, rather like how the probability of an individual going mad may increase with their intelligence. (I don’t think that frogs go insane as easily as humans do, though it would be hard to be sure.) Depending on how this scales, it could mean that life is inherently doomed. But that would result in a universe where we were unlikely to encounter other intelligent life… uh...
It doesn’t even need to scale that badly: if extinction events follow a power law (they do), there are parameters under which a system can survive indefinitely, and very similar parameters under which it has a finite expected lifespan. It would be nice to know where we stand. The creation of AI is just one more point on this road of increasing complexity, which may lead inevitably to instability and destruction.
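To make that knife-edge concrete (a toy model of my own, not anything established about actual extinction statistics): suppose the lifetime T of a civilization has a power-law tail, P(T > t) = (t/t₀)^(−α) for t ≥ t₀. The expected lifetime is the integral of P(T > t) over all t, which works out to E[T] = t₀ · α/(α − 1) when α > 1, and E[T] = ∞ when α ≤ 1. So α = 1.01 gives a finite expected lifespan of about 101·t₀, while α = 0.99, a distribution you could barely tell apart from it empirically, gives an infinite one. Which side of α = 1 we sit on is exactly the “where we stand” question.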
I suppose the only answer is to say that destruction is acceptable (and possibly inevitable); total area under the utility curve is what counts. Wanting an interesting world may be like deciding to smoke and drink and die young—and it may be the right decision. The AIs of the future may decide that dooming all life in the long run is worth it.
In short, the answer to “Eliezer’s wager” may be that we have an irrational bias against destroying the universe.
But then, deciding what risk levels are acceptable over the next century depends on knowing more about cosmology, the end of the universe, and the total amount of computation that the universe is capable of.
I think that solving aging would change people’s utility calculations so that they discount the future less, bringing them more in line with the “correct” utility computations.
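As a rough illustration (my numbers, chosen only to show the shape of the effect, not actuarial figures): if you survive each year with probability s and have pure time preference δ, utility t years out gets weight (s·δ)^t. With today’s roughly 1% average annual mortality, s ≈ 0.99, so utility 200 years out is weighted by at most 0.99^200 ≈ 0.13 even before any pure time preference. With aging solved and only accident-level mortality left, say s ≈ 0.9995, the same weight is 0.9995^200 ≈ 0.90. A purely self-interested agent would suddenly care about events two centuries out roughly seven times as much.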
Re: AI hell-worlds: SIAI should put “I Have No Mouth, and I Must Scream” by Harlan Ellison on its list of required reading.