...it would be really nice if someone had bothered to actually check statistics on how many car failures were actually due to each of the possible causes.
Is subadditivity a one-way ratchet such that we can reliably infer that people are wrong to be more optimistic about cryonics after seeing fewer failure steps?
This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don’t know what level of granularity would’ve led mechanics to be accurate, and furthermore, the main way to produce accuracy would’ve been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.
If I were trying to use this effect for a Grey Arts explanation (conveying a better image of what I honestly believe to be reality, without any false statements or omissions, but using explanatory techniques that a Dark Arts practitioner could manipulate to make people believe something else instead, e.g., writing a story as a way of conveying an idea), I would try to diagram cryonics possibilities into a tree where I believed the branches of a given level and the leaf nodes all had roughly equal probability; just showing the tree would then recruit the equal-leaf-size effect and cause the audience to concretely represent this probability estimate.
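For concreteness, a minimal Haskell sketch of such an equal-leaf tree; the types, names, and every number below are invented for illustration, not actual cryonics estimates:

    -- A toy probability tree: the equal-leaf-size effect relies on every
    -- leaf carrying about the same weight, so that counting leaves
    -- approximates summing probability.
    data Tree = Leaf Double | Branch [Tree]

    leafWeights :: Tree -> [Double]
    leafWeights (Leaf p)    = [p]
    leafWeights (Branch ts) = concatMap leafWeights ts

    -- Does a diagram recruit the effect honestly? Check that all leaves
    -- fall within a tolerance of one another.
    equalLeaved :: Double -> Tree -> Bool
    equalLeaved tol t = maximum ws - minimum ws <= tol
      where ws = leafWeights t

    -- Splitting only one branch leaves the leaves unequal; dividing every
    -- category in proportion to its probability restores equality.
    skewed, honest :: Tree
    skewed = Branch [Leaf 0.25, Leaf 0.25, Leaf 0.25,
                     Branch [Leaf 0.125, Leaf 0.125]]
    honest = Branch (replicate 8 (Leaf 0.125))

Here equalLeaved 0.01 skewed ~> False, while equalLeaved 0.01 honest ~> True.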
This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don’t know what level of granularity would’ve led mechanics to be accurate, and furthermore, the main way to produce accuracy would’ve been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.
My suspicion is that conjunctive and disjunctive breakdowns exhibit different behavior which can be manipulated to increase or decrease a naive probability estimate:
in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can push a naive estimate.
To some extent this is appropriate, since people are usually overconfident, but I suspect that at some granularity the conjunctions start getting unfairly negative: imagine that people are unwilling to give any step >99% odds; then you can break a process down into a hundred fine steps, and their elicited probability must be <0.99^100, that is, <0.37 (see the sketch after this list).
in a disjunctive case, we can run it in reverse and instead manipulate a probability estimate upwards by enumerating every possible route.
As before, this can be appropriate, both to counter salience biases and to really be comprehensive, but it too can be tendentious when it's throwing a laundry list at people: if people refuse to assign, say, <1% odds to any particular disjunct, then for 100 independent disjuncts you're going to elicit a high naive probability (>63%*).
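A minimal sketch of both manipulations, assuming (as above) a 99% floor on every conjunct, a 1% floor on every disjunct, and independence throughout; the function names are mine, not anything from the study:

    -- Conjunctive framing: n steps, each capped at 99% by the respondent.
    conjunctive :: Int -> Double
    conjunctive n = product (replicate n 0.99)   -- 0.99^n; ~0.366 at n = 100

    -- Disjunctive framing: n routes, each floored at 1%, combined with the
    -- sum rule P(A or B) = a + b - a*b (0 is the identity for this operator).
    disjunctive :: Int -> Double
    disjunctive n = foldr (\a b -> a + b - a*b) 0 (replicate n 0.01)
                                                 -- 1 - 0.99^n; ~0.634 at n = 100

The same hundred-step breakdown, run through the two framings, lands on opposite sides of 50%.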
Finally, since you can frame a problem as either p or 1-p, if you follow me, you can generally force your preferred choice.
With cryonics, you can take the hostile conjunctive approach: “in order for cryonics to work, you must sign up and the cryonics society must not fail and there must not be hyperinflation rendering your life insurance policy worthless and your family must not stall the procedure and the procedure must go well and Ben Best must decide not to experiment on your particular procedure and...” Or you can take the friendly disjunctive approach: “in order for cryonics to fail, all these strategies must fail: your neuronal weights must be unrecoverable by an atom-by-atom readout, unrecoverable by inference from local cell structures, unrecoverable by global inferences, unrecoverable from a lifetime of output, unrecoverable by...”
* not sure about this one. I know the generalized sum rule but not how to apply it to 100 disjuncts of 0.01 each; a Haskell fold gives foldr (\a b -> a + b - a*b) 0.01 (replicate 99 0.01) ~> 0.63396.
in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can push a naive estimate.
Except that people intuitively average these sorts of links, so hostile manipulation involves negating the conjunction and then turning it into a disjunction: please, dear reader, assign a probability to not-A, and not-B, and not-C... oh, look, the probability of A and B and C seems quite low now! If you were describing an actual conjunction, a Dark Arts practitioner would manipulate it in favor of cryonics by zooming in and dwelling on links of great strength. To hostilely drive down the intuitive probability of a conjunction, you have to break it down into lots and lots of possible failure modes, which is of course the strategy practiced by people who prefer to drive down the probability of cryonics. (Their motivation is shown by their failure to cover any disjunctive success modes.)
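As a minimal sketch of why that matters (the 0.99 link strength is an assumed placeholder): if intuition averages link strengths instead of multiplying them, merely fine-graining a strong conjunction never moves the felt estimate, which is why the hostile move has to change the framing instead:

    avg :: [Double] -> Double
    avg xs = sum xs / fromIntegral (length xs)

    -- For any n >= 1, the averaged impression sits at 0.99, while the
    -- true conjunction decays toward zero (~0.366 at n = 100).
    felt, actual :: Int -> Double
    felt   n = avg     (replicate n 0.99)
    actual n = product (replicate n 0.99)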
not sure about this one. I know the generalized sum rule but not how to apply it to 100 disjuncts of 0.01 each
This is just the complement of the previous probability you computed: 1-0.99^100, which is indeed approximately 0.634. Rather than compute this directly, you might observe that (1-1/n)^n converges very quickly to 1/e, approximately 0.368, so the disjunction tends to 1-1/e, approximately 0.632.
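A quick check of both routes in the footnote's Haskell (a sketch; the operator in the fold is the complement rule in disguise, since a + b - a*b = 1 - (1-a)*(1-b)):

    direct, viaFold, limit :: Double
    direct  = 1 - 0.99 ^ (100 :: Int)                               -- ~0.6340
    viaFold = foldr (\a b -> a + b - a*b) 0.01 (replicate 99 0.01)  -- ~0.6340
    limit   = 1 - exp (-1)   -- ~0.6321, the large-n limit of 1-(1-1/n)^n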
Yeah, nsheppard pointed that out to me after I wrote the fold. Oh well! I’ll know better next time.