There’s another effect of “unpacking”, which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.
I wonder if that would actually work, or if the finer granularity basically just trashes your brain’s ability to estimate probabilities.
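The arithmetic behind unpacking can at least be sketched concretely. As a toy illustration (every probability below is made up, purely to show the shape of the calculation), a conjunctive path to success multiplies its step probabilities, so the product shrinks with each added step, while disjunctive paths to failure combine as one minus the product of their complements:

```python
import math

# Toy illustration of the unpacking arithmetic; all numbers here are
# hypothetical, chosen only to show how the two aggregations differ.

# Conjunctive "path to success": every step must succeed, so the
# per-step probabilities multiply and the product shrinks quickly.
success_steps = [0.9, 0.8, 0.95, 0.7]
p_success = math.prod(success_steps)

# Disjunctive "paths to failure": any single independent failure path
# suffices, so the chance of at least one firing is 1 minus the
# product of the complements.
failure_paths = [0.05, 0.1, 0.02]
p_any_failure = 1.0 - math.prod(1.0 - p for p in failure_paths)

print(f"P(all steps succeed) = {p_success:.3f}")     # ~0.479
print(f"P(any failure path)  = {p_any_failure:.3f}") # ~0.162
```

Unpacking the success side tends to push the estimate down (more conjuncts to multiply), and unpacking the failure side tends to push the failure estimate up (more disjuncts to notice), which is why doing both, rather than only one, seems less lopsided.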
I think it’s also worth mentioning that this kind of questionnaire does not account for possible future advancements, which are omitted simply because they aren’t known yet. The same, of course, applies to further negative developments in the future, but looking at that list, items like the following are completely missing:
Legislation improving the safety and conditions of cryopreserved people is passed
Neuroscientists develop new general techniques for restoring function in patients with brain damage
A breakthrough in nanotechnology allows better analysis and faster repair of damaged neurons
Supercomputers can be used to retrace the original condition of a modified or damaged brain
Supercomputers (with the help of FAI?) can be used to reconstruct missing data from redundancy (as mentioned above in Benja’s comment)
etc.
That is to say, it’s one thing to ‘unpack’ a proposition and another to do it accurately; at the least, I would think a questionnaire covering both uncertain positive and uncertain negative future events would seem less biased.
I think it’s also worthwhile to consider the possibility that this unpacking business is a sort of inverse of the conjunction fallacy; it’s not exactly the same thing, but I think it’s a very closely related topic.