This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don’t know what level of granularity would’ve led the mechanics to be accurate, and furthermore, the main way to produce accuracy would’ve been to divide things into numbers of categories proportional to their actual probability, so that all leaves of the tree had roughly equal weight. Your question sounds as though breaking things down further always produces better estimates, and that is not the lesson of this study.
My suspicion is that conjunctive and disjunctive breakdowns exhibit different behavior which can be manipulated to increase or decrease a naive probability estimate:
in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.
To some extent this is appropriate, since people are usually overconfident, but I suspect that at some granularity the conjunctions start getting unfairly negative: imagine that people are unwilling to give any step >99% odds; then you can break a process down into a hundred fine steps, and their elicited probability must be <0.99^100, or <0.37.
in a disjunctive case, we can run it in reverse and instead manipulate a probability estimate upwards by enumerating every possible route.
As before, this can be appropriate, countering salience biases and ensuring comprehensiveness, but it too can be tendentious when it amounts to throwing a laundry list at people: if people refuse to assign, say, <1% odds to any particular disjunct, then for 100 independent disjuncts you’re going to elicit a high naive probability (>63%*).
Finally, since you can frame a problem as either p or 1-p, if you follow me, you can generally force your preferred choice.
With cryonics, you can take the hostile conjunctive approach: “in order for cryonics to work, you must sign up and the cryonics society must not fail and there must not be hyperinflation rendering your life insurance policy worthless and your family must not stall the procedure and the procedure must go well and Ben Best must decide not to experiment on your particular procedure and...” Or you can take the friendly disjunctive approach: “in order for cryonics to fail, all these strategies must fail: your neuronal weights be unrecoverable by an atom-by-atom readout, unrecoverable by inference from local cell structures, unrecoverable by global inferences, unrecoverable from a lifetime of output, unrecoverable by...”
* not sure about this one. I know the generalized sum rule but not how to apply it to 100 0.01 disjuncts; a Haskell fold gives foldr (\a b -> a + b - a*b) 0.01 (replicate 99 0.01) ~> 0.63396.
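For concreteness, the two elicited estimates above can be put into one runnable sketch (assuming, as in the text, 100 independent steps or disjuncts at 0.99 and 0.01 respectively; the fold is the one from the footnote):

```haskell
-- Naive estimates for the two framings discussed above, assuming 100
-- independent steps (conjunctive) or disjuncts (disjunctive).

-- Conjunctive framing: every step capped at 0.99 by a respondent
-- unwilling to go higher; the elicited estimate is the product.
conjunctive :: Double
conjunctive = product (replicate 100 0.99)   -- ~0.366

-- Disjunctive framing: every disjunct floored at 0.01, combined pairwise
-- with the generalized sum rule P(A or B) = P(A) + P(B) - P(A)P(B).
disjunctive :: Double
disjunctive = foldr (\a b -> a + b - a * b) 0.01 (replicate 99 0.01)   -- ~0.634

main :: IO ()
main = print (conjunctive, disjunctive)
```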
in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.
Except that people intuitively average these sorts of links, so hostile manipulation involves negating the conjunction and then turning it into a disjunction: please, dear reader, assign a probability to not-A, and not-B, and not-C... oh, look, the probability of A and B and C seems quite low now! If you were describing an actual conjunction, a Dark Arts practitioner would manipulate it in favor of cryonics by zooming in and dwelling on links of great strength. To hostilely drive down the intuitive probability of a conjunction, you have to break it down into lots and lots of possible failure modes, which is of course the strategy practiced by people who prefer to drive down the probability of cryonics. (Their motivation is shown by their failure to cover any disjunctive success modes.)
not sure about this one. I know the generalized sum rule but not how to apply it to 100 0.01 disjuncts
This is just the complement of the previous probability you computed: 1-0.99^100, which is indeed approximately 0.632. Rather than compute this directly, you might observe that (1-1/n)^n converges very quickly to 1/e or approximately 0.368.
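Both claims check out numerically; a quick sketch (the fold is the one from the footnote above):

```haskell
-- The footnote's fold versus the complement form 1 - 0.99^100,
-- plus the (1 - 1/n)^n -> 1/e limit mentioned above.

foldForm, complementForm :: Double
foldForm       = foldr (\a b -> a + b - a * b) 0.01 (replicate 99 0.01)
complementForm = 1 - 0.99 ** 100

-- (1 - 1/n)^n for growing n; converges to 1/e ~ 0.3679.
limitTerms :: [Double]
limitTerms = [(1 - 1 / n) ** n | n <- [10, 100, 1000]]

main :: IO ()
main = do
  print (foldForm, complementForm)   -- both ~0.634
  print limitTerms                   -- [~0.349, ~0.366, ~0.368]
```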
Yeah, nsheppard pointed that out to me after I wrote the fold. Oh well! I’ll know better next time.