I like this comment. Where the intuition pump breaks down is that you haven’t realistically described what Alice would do with a million dollars, knowing that the Earth has a 50% chance of being destroyed tomorrow. She’d probably spend it trying to diminish that probability—a more realistic possibility than total confidence in 50% destruction combined with total helplessness to do anything about it.
If we presume Alice can diminish the probability of doom somewhat by spending her million dollars, then it could easily have sufficient positive expected value to make the trade with Bob clearly beneficial.
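To make that concrete, here is a minimal expected-value sketch. Every number in it (how much Alice values the world surviving, and how much a million dollars could shave off the doom probability) is a made-up assumption purely for illustration, not a claim about realistic figures:

```python
# Toy expected-value comparison for Alice's million dollars.
# All numbers below are hypothetical assumptions for illustration only.

P_DOOM = 0.5                 # Alice's credence that the world ends tomorrow
VALUE_OF_SURVIVAL = 1e12     # hypothetical utility Alice assigns to the world not ending
DELTA = 0.001                # hypothetical reduction in p(doom) that $1M could buy

def expected_utility(p_doom: float, money: float) -> float:
    """Money is only useful if the world survives; survival itself carries huge value."""
    return (1 - p_doom) * (VALUE_OF_SURVIVAL + money)

# Option A: Alice keeps the million and does nothing about the risk.
ev_hold = expected_utility(P_DOOM, money=1_000_000)

# Option B: she spends the million and shaves DELTA off the doom probability.
ev_spend = expected_utility(P_DOOM - DELTA, money=0)

print(f"hold the money:   {ev_hold:.4e}")   # ~5.000e11 with these numbers
print(f"spend on the risk: {ev_spend:.4e}") # ~5.010e11 with these numbers
```

With these made-up numbers, even a 0.1-percentage-point reduction in p(doom) is worth far more than the million itself, which is the sense in which getting the million from Bob could be clearly worth it to Alice.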
Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).
I agree it’s rare to have a global impact with a million dollars. But if you’re 50% confident the world will be destroyed tomorrow, that implies you have some sort of specific knowledge about the mechanism of destruction. The reason it’s often hard to spend a million dollars to big effect is precisely the lack of such specific information.
But if you are adding the stipulation that there’s nothing Alice can do to affect the probability of doom, then I agree that your math checks out.