In the original Pascal’s wager, he had a prior of 0.5 for the existence of God.
edit: And in case it’s not clear, the point is that Pascal’s wager does not depend on the misestimated probability being low. Any finite variation requires that the probability be high enough.
Likewise, here (linked from the thread I linked) you have both: a silly-high prior (1 in 2000) and a big impact (7 billion lives).
edit: whoops. The 1 in 2000 figure and the general talk of low probabilities are in the thread, not in the video. In the video she just goes ahead and assigns an arbitrary 30% probability to picking an organization with which we live and without which we die, which is obviously far too high. Much as Pascal’s wager moves from a 0.5 probability to “the probability could be low, the impact is still infinite!”, the LW discussion of this video moves from an indefensible 30% to “it doesn’t matter.” Let’s picture a Pascal Scam: someone says that there is a 50% probability (mostly via ignorance) that unless they are given a lot of money, 10^30 people will die. The audience doesn’t buy the 50% probability, but it still pays up.
(Reply to edit: In the presentation, that 30% is one probability in a chain, not an absolute value. Stop with the willful misrepresentations, please.)
From the article:
However, Pascal realizes that the value of 1⁄2 actually plays no real role in the argument, thanks to (2). This brings us to the third, and by far the most important, of his arguments...
If there were a 0.5 probability that the Christian God existed, the wager would make a fuckton more sense. Today we think Pascal’s Wager is a logical fallacy, rather than a mere mistaken probability estimate, only because later versions of the argument were put forward for lower probabilities, and/or because Pascal went on to argue that it would carry for lower probabilities.
If the video is where the actual instance of Pascal’s Wager is being offered in support of SIAI, then it would have been better to link it directly. I also hate video because it’s not searchable, but I can hardly blame you for that, so I will try scanning it.
Before scanning, I precommit to renouncing, abjuring, and distancing MIRI from the argument in the video if it argues for no probability higher than 1 in 2000 of FAI saving the world, because I myself do not positively engage in long-term projects on the basis of probabilities that low (though I sometimes avoid doing things for dangers that small). There ought to be at least one x-risk effort with a greater probability of saving the world than this—or if not, you ought to make one. If you know yourself for an NPC and that you cannot start such a project yourself, you ought to throw money at anyone launching a new project whose probability of saving the world is not known to be this small. 7 billion is also a stupidly low number—x-risk dominates all other optimal philanthropy because of the value of future galaxies, not because of the value of present-day lives. The confluence of these two numbers makes me strongly suspect that, if they are not misquotes in some sense, both low numbers were (presumably unconsciously) chosen to make the ‘lives saved per dollar’ look like a reasonable number in human terms, when in fact the x-risk calculus is such that all utilons should be measured in Probability of OK Outcome because the value of future galaxies stomps everything else.
Attempts to argue for large probabilities that FAI is important, and then tiny probabilities that MIRI is instrumental in creating FAI, will also strike me as a wrongheaded attempt at modesty. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. Calculations of marginal impact in POKO/dollar are sensible for comparing two x-risk mitigation efforts in demand of money, but in this case each marginal added dollar is rightly going to account for a very tiny slice of probability, and this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginal probabilities per added unit effort. It would only be Pascal’s Wager if the whole route-to-humanity-being-OK were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
(Scans video.)
This video is primarily about value of information estimates.
“Principle 2: Don’t trust your estimates too much. Estimates in, estimates out.” Good.
Application to the Singularity… It’s explicitly stated that the value is 7 billion lives plus all future generations, which is better—a lower bound is being set, not an estimated exact value.
Final calculation shown:
Probability of eventual AI: 80%
Probability AI with no safeguards will kill us: 80%
(Both of these numbers strike me as a bit suspicious in their apparent medianness, which is something that often happens when an argument is unconsciously optimized for sounding reasonable. Really, the probability that AI happens at all, ever, is 80%? Isn’t that a bit low? Is this supposed to be factoring in the probability of nanotechnological warfare wiping out humanity before then, or something? Certainly, AI being possible in principle should have a much more extreme probability than 80%. And a 20% probability of an unsafeguarded AI not killing you sounds like quite an amazing bonanza to get for free. But carrying on...)
Probability we manage safeguards: 40%
(No comment.)
Probability current work is why we manage: 30%
(Arguably too low. Even if MIRI crashes and somebody else carries on successfully, I’d estimate a pretty high probability that their causal pathway there will have had something to do with MIRI. It is difficult to overstate just how much this problem was not on the horizon, at all, of work anyone could actually go out and do twenty years ago.)
Net probability: 7%.
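For concreteness, here is the slide’s chain multiplied out (a minimal Python sketch; the variable names are mine, the figures are the ones listed above, and the rounding to 7% is the slide’s):

```python
# Chained estimate from the slide, multiplied out.
p_eventual_ai  = 0.80  # probability of eventual AI
p_unsafe_kills = 0.80  # probability AI with no safeguards kills us
p_safeguards   = 0.40  # probability we manage safeguards
p_current_work = 0.30  # probability current work is why we manage

net = p_eventual_ai * p_unsafe_kills * p_safeguards * p_current_work
print(f"Net probability: {net:.1%}")  # 7.7%, shown as 7% on the slide
```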
This is not necessarily a result I’d agree with, but it’s not a case of Pascal’s Wager on its face. 7% probabilities of large payoffs are a reasonable cause of positive action in sane people; it’s why you would do an Internet startup.
(continues scanning video)
I do not see any slide showing a probability of 1 in 2000. Was this spoken aloud? At what time in the episode?
(Arguably too low. Even if MIRI crashes and somebody else carries on, I’d estimate a pretty high probability that their causal pathway there will have had something to do with MIRI. It is difficult to overstate just how much this problem was not on the horizon, at all, of work anyone could actually go out and do before MIRI.)
It doesn’t merely have to have something to do with MIRI; it must be the case that without funding MIRI we all die, and with funding MIRI we don’t, and this is precisely the sort of thing that should have very low probability if MIRI is not demonstrably impressive at doing something else.
I do not see any slide showing a probability of 1 in 2000. Was this spoken aloud? At what time in the episode?
Hmm. It is mentioned here, and other commenters there likewise talk of low probabilities. I guess I just couldn’t quite imagine someone seriously putting a non-small probability on the “with MIRI we live, without we die” aspect of it. Startups have quite a small probability of success, even without attempting to do the impossible.
edit: And of course what actually matters is the donor’s probability.
(Arguably too low. Even if MIRI crashes and somebody else carries on successfully, I’d estimate a pretty high probability that their causal pathway there will have had something to do with MIRI. It is difficult to overstate just how much this problem was not on the horizon, at all, of work anyone could actually go out and do twenty years ago.)
For this to work out to 7%, a donor would need a 30% probability that their choice of organization to donate to is such that with this organization we live, and without it, we die.
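A small back-of-envelope check of that claim (a Python sketch; the split into “other factors” versus the organization-specific factor is my framing of the slide’s chain, not something shown in the presentation):

```python
# Hold the slide's other three figures fixed and see what the
# organization-specific factor has to be for the net to reach ~7%.
p_other_factors = 0.80 * 0.80 * 0.40  # AI happens, unsafe AI kills, safeguards managed

print(f"Net with a 30% org factor: {p_other_factors * 0.30:.1%}")      # ~7.7%, the slide's 7%
print(f"Org factor needed for 7% net: {0.07 / p_other_factors:.1%}")   # ~27%
```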
What donor can be so confident in their choice? Is Thiel this confident? Of course not: he only puts in a small fraction of his income, and he puts more into something like this. By the way, I am rather curious about your opinion on this project.