“…could win $1 billion, and I am risk and time neutral”
Who has constant marginal utility of money up to $1,000,000,000?
The biggest problem with these schemes is that the closer one gets to infinitesimal probability, and thus usually to infinitesimal quality or quantity of evidence, the closer to infinity is the number of possible extreme-consequence schemes one can dream up.
Consequences can’t be inflated to make up for arbitrarily low probabilities. Consequences are connected: if averting human extinction by proliferation of morally valueless machinery is super valuable because of future generations, then the gains of averting human extinction by asteroids, or engineered diseases, will be on the same scale.
It cost roughly $100 million to launch a big search for asteroids that has now located 90%+ of large (dinosaur-killer size) asteroids, and such big impacts happen every hundred million years or so, accompanied by mass extinctions, particularly of large animals. If working on AI, before AI is clearly near and better understood, delivered less expected x-risk reduction per unit cost than asteroid defense, or than adding to the multibillion-dollar annual anti-nuclear-proliferation or biosecurity budgets, or some other intervention, then it would lose.
“Some nonzero chance” isn’t enough, it has to be a “chance per cost better than the alternatives.”
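To make that comparison concrete, here is a minimal back-of-the-envelope sketch; every probability and cost in it is a made-up placeholder, not an estimate anyone has defended:

```python
# Toy "risk reduction per dollar" comparison between interventions.
# Every number below is an illustrative placeholder, not a real estimate.

interventions = {
    # name: (assumed reduction in extinction probability, assumed cost in dollars)
    "asteroid survey":        (1e-4, 1e8),
    "biosecurity funding":    (1e-4, 1e9),
    "early AI-risk research": (1e-5, 1e8),
}

def reduction_per_dollar(prob_reduction, cost):
    """Expected extinction-probability reduction bought per dollar spent."""
    return prob_reduction / cost

# Rank interventions by cost-effectiveness rather than by "nonzero chance".
for name, (dp, cost) in sorted(interventions.items(),
                               key=lambda kv: reduction_per_dollar(*kv[1]),
                               reverse=True):
    print(f"{name:25s} {reduction_per_dollar(dp, cost):.2e} per dollar")
```

On placeholder numbers like these, the ranking is driven entirely by the assumed inputs, which is exactly why the quality of the probability estimates matters.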
I should have said something about marginal utility there. Doesn’t change the three tests for a Pascal scam though.
The asteroid threat is a good example of a low-probability disaster that is probably not a Pascal scam. On point (1) it is fairly lottery-like, insofar as asteroid orbits are relatively predictable—the unknowns are primarily “known unknowns”, being deviations from very simple functions—so it’s possible to compute odds from actual data, rather than merely guessing them from a morass of “unknown unknowns”. It passes test (2) as we have good ways to simulate with reasonable accuracy and (at some expense, only if needed) actually test solutions. And best of all it passes test (3): experiments or observations can be done to improve our information about those odds. Most of the funding has, quite properly, gone to those empirical observations, not towards speculating about solutions before the problem has been well characterized.
Alas, most alleged futuristic threats and hopes don’t fall into such a clean category: the evidence is hopelessly equivocal (even if declared with a false certainty) or missing, and those advocating that our attention and other resources be devoted to them usually fail to propose experiments or observations that would improve that evidence and thus reduce our uncertainty to levels that would distinguish them from the near-infinity of plausible disaster scenarios we could imagine. (Even with just the robot apocalypse, there is a near-infinity of ways one can plausibly imagine it playing out.) Same, generally speaking, for future diseases—there may well be a threat lying in there, but we don’t have any general ways of clearly characterizing specifically what those threats might be and thus distinguishing them from the near-infinity of threats we could plausibly imagine (again generally speaking—there are obviously some well-characterized specific diseases for which we do have such knowledge).
Who has constant marginal utility of people up to 1,000,000,000 people? (To answer the rhetorical question—no one.)
This reminds me of Jaynes and transformation groups—establish your prior based on transforms that leave you with the same problem. I find this makes short work of arbitrary assertions that want to be taken seriously.
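For readers who don’t know the Jaynes reference, a minimal sketch of the textbook scale-parameter case (nothing in the thread’s disaster arguments depends on it): if merely changing units leaves the problem unchanged, the prior must satisfy p(σ) = a·p(aσ) for every rescaling a, and only the 1/σ form does.

```python
# Numerical check of the standard Jaynes transformation-group example for a
# scale parameter sigma: invariance under a change of units (sigma -> a*sigma)
# requires p(sigma) == a * p(a*sigma) for all a, which 1/sigma satisfies and a
# flat prior does not.  The grid values are arbitrary.

import numpy as np

sigmas = np.array([0.5, 1.0, 2.0, 5.0])

def invariance_gap(prior, a):
    """Largest violation of p(sigma) == a * p(a*sigma) on the grid."""
    return np.max(np.abs(prior(sigmas) - a * prior(a * sigmas)))

jeffreys = lambda s: 1.0 / s           # scale-invariant prior
flat     = lambda s: np.ones_like(s)   # not scale-invariant

for a in (2.0, 10.0):
    print(f"rescale by {a:>4}: 1/sigma gap = {invariance_gap(jeffreys, a):.1e}, "
          f"flat gap = {invariance_gap(flat, a):.1e}")
```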
Someone who’s already got many billions? (But then again, for such a person a 1/1000 chance of getting one more billion wouldn’t even be worth the time spent to participate in such a lottery, I suppose.)
From zero up to $1,000,000,000.
I do, in that there are nonfatal actions that I would not take in exchange for that much money. Of course, at numbers over several hundred thousand, money loses unit utility very fast. One billion dollars has significantly less than one thousand times the value to me of one million dollars, because the things I can buy with a billion dollars are less than one thousand times as valuable to me as the things I can buy with a million.
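A toy illustration of that last point, using log utility purely as a conventional stand-in for diminishing marginal utility (not a model of anyone’s actual preferences):

```python
# Toy illustration of diminishing marginal utility of money.
# log(1 + dollars) is only a conventional stand-in curve, not a claim about
# anyone's actual preferences.

import math

def utility(dollars):
    return math.log1p(dollars)

u_million = utility(1_000_000)
u_billion = utility(1_000_000_000)

print(f"u($1M) = {u_million:.2f}")
print(f"u($1B) = {u_billion:.2f}")
print(f"utility ratio = {u_billion / u_million:.2f}  (vs. a 1000x money ratio)")
```

The specific ratio is an artifact of the toy curve, but the direction of the point holds for any concave utility function.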