My guess is that he doesn’t mean it for all things: if you can buy a $1 lottery ticket that has a one in ten thousand chance of winning a million dollars, you shouldn’t discount it just because of the low probability.
But for Pascal’s-wager-type things, we’re typically estimating the probabilities instead of being able to calculate them. He seems to be using 1/1000 as the cutoff where human estimates of probability stop being accurate enough to base decisions on. That doesn’t seem like a bad cutoff point to me, but even being charitable and extending it to 1/10000 would still probably disqualify any Pascal’s-wager-type argument.
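A rough sketch of that difference, using the lottery from above plus a hypothetical Pascal’s-wager-style bet (the wager’s cost and payoff are made up): when the probability is calculated from the rules of the game, the expected value settles the question; when it’s an estimate, being off by an order of magnitude or two flips the answer.

```python
# Expected value of the lottery above: the probability is calculated
# from the game's rules, not estimated.
ticket_cost = 1.0
p_win = 1 / 10_000
prize = 1_000_000
ev = p_win * prize - ticket_cost
print(f"Lottery EV per ticket: ${ev:,.2f}")  # $99.00 -- clearly worth it

# A Pascal's-wager-type bet (numbers are hypothetical): here the
# probability is an estimate, so check how sensitive the conclusion is
# to that estimate being off by an order of magnitude either way.
cost = 100.0
payoff = 10_000_000
for p_est in (1e-3, 1e-4, 1e-5, 1e-6):
    ev = p_est * payoff - cost
    verdict = "take it" if ev > 0 else "skip it"
    print(f"p = {p_est:.0e}: EV = ${ev:>12,.2f} -> {verdict}")
```

The lottery verdict is insensitive to anything we could plausibly be wrong about, while the wager’s verdict flips entirely inside the range of estimation error.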
“He seems to be using 1/1000 as the cutoff where human estimates of probability stop being accurate enough to base decisions on.”
I doubt this is what Roko means. Probabilities are “in the mind”; they’re our best subjective estimates of what will happen, given our incomplete knowledge and calculating abilities. In some sense it doesn’t make sense to talk about our best-guess probabilities being (externally) “accurate” or “inaccurate”. We can just make the best estimates we can make.
What can it mean for probabilities to “not be accurate enough to base decisions on”? We have to decide, one way or another, with the best probabilities we can build or with some other decision procedure. Is zero an accurate enough probability (of cryonics success, or of a given Pascal’s wager-like situation) to base decisions on, if an estimated 1 in ten thousand or whatever is not?
IAWYC (I think my original statement was wrong), but I disagree that there’s no difference between ‘accurate’ and ‘inaccurate’ probabilities.
In my mind there’s a big difference between a probability where there’s one step between the data and your probability (such as a lottery or a coin flip), and a case where there are multiple fuzzy inferential steps (such as an estimate of the longevity of the human race). The more you have to extrapolate and fill in the gaps where you don’t have data, the more room there is for error to creep in.
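A toy Monte Carlo sketch of that compounding (all numbers made up): one estimate with a single modest error factor, versus a five-step chain where each step multiplies in its own error factor.

```python
import random

random.seed(0)
N = 100_000

def noisy(sigma):
    # One multiplicative error factor: lognormal noise centered near 1.
    return random.lognormvariate(0, sigma)

# One inferential step between data and probability (lottery-like):
# a single, modest error factor.
one_step = sorted(noisy(0.1) for _ in range(N))

# Five fuzzy inferential steps (far-future-like): each step multiplies
# in its own error factor, so the errors compound.
five_steps = sorted(
    noisy(0.5) * noisy(0.5) * noisy(0.5) * noisy(0.5) * noisy(0.5)
    for _ in range(N)
)

for label, draws in (("one step", one_step), ("five steps", five_steps)):
    lo, hi = draws[int(0.05 * N)], draws[int(0.95 * N)]
    print(f"{label:<10} 90% of estimates within x{lo:.2f} to x{hi:.2f} of truth")
```

The one-step estimate stays within tens of percent of the truth, while the five-step chain routinely lands off by several-fold in either direction, even though no single step looks wildly uncertain.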
For things in the realm of ‘what will happen in the far future’, it’s not clear to me that any probability you assign will be anything but speculation, and as such I’d assign any probability for that type of event, no matter what it is, a rather low accuracy.
This raises the question of whether it’s worth assigning probabilities at all to these kinds of events, where there are too many unknown (and unknown unknown) factors influencing them. (And if I’m terribly misunderstanding something, please let me know.)
When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.
In nuclear safety, I hear, they use a measure called “nanomelts”, a one-in-a-billion risk of a meltdown. They can then rank risks based on cost-to-fix per nanomelt, for example.
In both of these, though, the numbers may be more grounded in data and then scaled to different timescales (e.g. if there were 250 deaths per day in the US from car accidents, out of a population of roughly 250 million, that works out to about a 1 in a million per day risk of death from driving; statistical techniques can then adjust this number for age, drunkenness, etc.)
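A sketch of both calculations, using the hypothetical driving numbers above and made-up fix costs (“nanomelt” is used loosely here as a one-in-a-billion unit of meltdown risk, per the comment above):

```python
# Scale aggregate accident data down to a per-person, per-day risk, as in
# the driving example above (the numbers are the hypothetical ones there).
deaths_per_day = 250
population = 250_000_000
daily_risk = deaths_per_day / population
print(f"risk of death from driving: {daily_risk:.1e} per person per day")

# Rank candidate safety fixes by dollars per nanomelt of risk removed,
# in the spirit of the nuclear-safety measure above (all data made up).
fixes = [
    # (name, cost in $, nanomelts of meltdown risk removed)
    ("operator retraining",      300_000, 10),
    ("redundant coolant pump", 2_000_000, 40),
    ("backup diesel generator", 5_000_000, 25),
]
for name, cost, nm in sorted(fixes, key=lambda f: f[1] / f[2]):
    print(f"{name:<25} ${cost / nm:>10,.0f} per nanomelt removed")
```

The ranking step is the point: once risks are in common units, the cheapest risk reduction per dollar sorts to the top regardless of how alarming each item sounds.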