Does anybody have thoughts on Roko’s suggestion that we don’t/shouldn’t really act on probabilities smaller than that of correctly calling four dice rolls in a row (1/6^4, about 1 in 1300), and that this is a reason we aren’t/shouldn’t be compelled by Pascal’s wager (or by any wager that is more probable but still falls below that threshold)?
I’m confused about this and would love to hear more people’s thoughts.
If there’s a pool of unknown things that are infinitely important, and what they are correlates positively with what would be important otherwise, then that gives you a lower bound on the probability of scenarios that you should take seriously no matter how high their utility. I’m not sure that it’s a very high lower bound though.
There’s also a class of things that we can’t really decide rationally because they’re far more improbable than our understanding of decision theory being completely wrong and/or because if they’re true then everything we know is wrong including decision theory.
If there’s a pool of unknown things that are infinitely important, and what they are correlates positively with what would be important otherwise, then that gives you a lower bound on the probability of scenarios that you should take seriously no matter how high their utility. I’m not sure that it’s a very high lower bound though.
It sounds like there may be a great point in here. I can’t quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?
There’s also a class of things that we can’t really decide rationally because they’re far more improbable than our understanding of decision theory being completely wrong.
If we use “decide rationally” to mean “decide in the way that makes most sense, given our limited knowledge and understanding” rather than “follow a particular procedure with a certain sort of justification”, I don’t think this is true. We should just be able to stick a probability on our understanding of decision theory being right, estimate conditional probabilities for outcomes and preferences if our understanding is/isn’t right, etc. There wouldn’t be a definite known framework in which this was rigorous, but it should yield good best-guess probabilities for decision-making, the way any other accounting for structural uncertainty does.
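To make that concrete, here is a minimal sketch (the numbers are made up purely for illustration) of folding uncertainty about decision theory itself into an ordinary best-guess probability:

```python
# A toy illustration (hypothetical numbers) of treating "is our decision theory
# even right?" as just another uncertain proposition and mixing over it.
p_dt_right = 0.99          # credence that our understanding of decision theory is sound
p_outcome_if_right = 0.20  # best-guess P(outcome) if that understanding is sound
p_outcome_if_wrong = 0.50  # a much fuzzier guess if it is not

p_outcome = (p_dt_right * p_outcome_if_right
             + (1 - p_dt_right) * p_outcome_if_wrong)
print(p_outcome)           # approximately 0.203: a single best-guess number we can still act on
```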
It sounds like there may be a great point in here. I can’t quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?
Suppose you have a prima facie utility function U on ordinary outcomes; and suppose that you estimate that, due to unknown unknowns, the probability that your real utility function V is infinite for a given ordinary outcome is 1/(10^100) * U(outcome). Then you should prefer eating a pie with U = 3 utils (versus, say, 1 util for not eating it) to a 1 in 10^200 chance of going to heaven and getting infinite utils (which I’m counting here as an extraordinary outcome that the relationship between U and V doesn’t apply to).
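Here is a quick numeric check of that comparison (a sketch with exact rationals; the figures are the ones above, and the infinite payoffs are compared by the probability of reaching them rather than by magnitude):

```python
# Compare P(infinite utility) under each option, using the comment's numbers.
from fractions import Fraction

p_inf_per_util = Fraction(1, 10**100)   # P(V is infinite) = 10^-100 * U(outcome)

# Option 1: eat the pie (U = 3)
p_inf_eat = 3 * p_inf_per_util

# Option 2: take the 1-in-10^200 shot at heaven; otherwise the ordinary U = 1 outcome
p_heaven = Fraction(1, 10**200)
p_inf_gamble = p_heaven + (1 - p_heaven) * 1 * p_inf_per_util

print(p_inf_eat > p_inf_gamble)         # True: the pie is the better bet for infinite utility
```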
If we use “decide rationally” to mean “decide in the way that makes most sense, given our limited knowledge and understanding” rather than “follow a particular procedure with a certain sort of justification”, I don’t think this is true.
I’m confused here, but I’m thinking of cases like: there’s a probability of 1 in 10^20 that God exists, but if so, then our best guess is also that 1 = 2. If God exists, then the utility of an otherwise identical outcome is (1/1)^1000 times what it would otherwise be; but since 1 = 2, it’s also (1/2)^1000 times what it would otherwise be. So can we ignore that case?
I suspect reasoning like this would produce cutoffs far below Roko’s, though. (And the first argument above probably wouldn’t reproduce normal behavior.)
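Spelling out the arithmetic in that case (a rough sketch using the hypothetical numbers above):

```python
# Under the "1 = 2" reading, the utility multiplier (1/1)^1000 could just as well
# be (1/2)^1000, which is astronomically small, so the case carries almost no weight.
from fractions import Fraction

p_god = Fraction(1, 10**20)         # 1 in 10^20 that God exists
multiplier = Fraction(1, 2**1000)   # the (1/2)^1000 reading of the utility factor

print(float(multiplier))                          # about 9.3e-302
print(p_god * multiplier < Fraction(1, 10**300))  # True: negligible next to everyday stakes
```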
If A is the (utility of the) status quo, B is the winning outcome, and C is the losing one, then the default lottery (not playing) is just A, and our roughly one-in-1000 lottery is (B + 1000*C)/1001, so preferring to pass on the lottery corresponds to 1000*(A - C) > (B - A). That is, no benefit of B over A is more than 1000 times the loss of C below A.
Or, formulating it as a bound on utility: even small losses significant enough to be worth thinking about weigh more than 1/1000th of the greatest possible prize. It looks like a reasonable enough heuristic for the choices of everyday life: don’t get bogged down by seemingly small nuisances; they are actually bad enough to be worth investing effort in systematically avoiding.
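For reference, here is the rearrangement behind that condition spelled out (just the algebra implicit in the comment above):

```latex
\begin{aligned}
A > \frac{B + 1000C}{1001}
  &\iff 1001A > B + 1000C \\
  &\iff 1001A - B - 1000C > 0 \\
  &\iff 1000(A - C) > B - A .
\end{aligned}
```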
My guess is that he doesn’t mean it for all things: if you can buy a 1 dollar lottery ticket that has a one in ten thousand chance of winning a million dollars, you shouldn’t discount it because of the low probability (its expected value is $100, far more than the ticket costs).
But for Pascal’s-wager-type things, we’re typically estimating the probabilities instead of being able to calculate them. He seems to be using 1/1000 as the cutoff for where human estimates of probability stop being accurate enough to base decisions on. This doesn’t seem like a bad cutoff point to me, but even being charitable and extending it to 1/10000 would still probably disqualify any Pascal’s-wager-type argument.
He seems to be using 1/1000 as the cutoff for where human estimates of probability stop being accurate enough to base decisions on.
I doubt this is what Roko means. Probabilities are “in the mind”; they’re our best subjective estimates of what will happen, given our incomplete knowledge and calculating abilities. In some sense it doesn’t make sense to talk about our best-guess probabilities being (externally) “accurate” or “inaccurate”. We can just make the best estimates we can make.
What can it mean for probabilities to “not be accurate enough to base decisions on”? We have to decide, one way or another, with the best probabilities we can build or with some other decision procedure. Is zero an accurate enough probability (of cryonics success, or of a given Pascal’s wager-like situation) to base decisions on, if an estimated 1 in ten thousand or whatever is not?
IAWYC (I think my original statement is wrong), but I disagree that there is no difference between ‘accurate’ and ‘inaccurate’ probabilities.
In my mind there’s a big difference between a probability where you have one step between the data and your probability (such as a lottery or a coin flip), and a case where you have multiple, fuzzy inferential steps (such as an estimate of the longevity of the human race). The more you have to extrapolate and fill in the gaps where you don’t have data, the more room there is for error to creep in.
For things in the realm of ‘things that will happen in the far future’, it’s not clear to me that a probability you assign to something will be anything but speculation, and as such I’d assign any probability (no matter what it is) for that type of event a rather low accuracy.
This raises the question of whether it’s worth it at all to assign probabilities to these kinds of events, where there are too many unknown (and unknown unknown) factors influencing them. (And if I’m terribly misunderstanding something, please let me know.)
When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.
In nuclear safety, I hear, they use a measure called “nanomelts”, i.e. a one-in-a-billion risk of a meltdown. They can then rank risks by cost-to-fix per nanomelt, for example.
In both of these, though, it might be based more on data and then scaled to different timescales (e.g., if there were 250 deaths per day in the US from car accidents, that would be roughly a 1 in a million per-day risk of death from driving for an average person; then use statistical techniques to adjust this number for age, drunkenness, etc.).
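As a back-of-the-envelope illustration of that kind of calculation (the population figure is my assumption, not from the comment):

```python
# Daily per-person risk = daily deaths / population (before any adjustment for
# age, drunkenness, etc.). The death figure is the hypothetical one above.
us_population = 300_000_000
deaths_per_day = 250

risk_per_person_per_day = deaths_per_day / us_population
print(risk_per_person_per_day)   # about 8.3e-07, i.e. roughly 1 in a million per day
```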