I don’t think it can discriminate between the possible impossible and the impossible impossible. It just throws up a uniform fog of “The outside view says it is nonvirtuous to try to distinguish within this reference class.”
This seems to be usually accounted for by value of information: you should do some unproven things primarily in order to figure out whether something like that is possible (or, in more detail, why not), before you know it to be possible. If something does turn out to be possible, you just keep on doing it, so that the primary motivation changes without the activity itself changing.
(One characteristic of doing something for its value of information, as opposed to for its expected utility, seems to be the expectation of having to drop it when it’s not working out. If something has high expected utility a priori, continuing to do it despite it not working won’t be as damaging (a priori), even though there is no reason to act this way.)
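To make that last point concrete, here’s a minimal sketch in Python; all the numbers (the success probability p, the per-period BENEFIT and COST, the HORIZON) are made-up assumptions for illustration. Probing for one period and dropping the activity on failure is the value-of-information policy; never dropping it is the stubborn one.

```python
# Toy model (illustrative assumptions only): an activity works with
# probability p; if it works it pays BENEFIT per period, if not it
# costs COST per period, over a HORIZON of periods.

HORIZON = 10   # total periods available
BENEFIT = 1.0  # per-period payoff if the activity works
COST = 1.0     # per-period loss if it doesn't

def ev_probe_then_drop(p: float) -> float:
    """Try for one period to learn whether it works; drop it if not."""
    return p * BENEFIT * HORIZON - (1 - p) * COST * 1

def ev_never_drop(p: float) -> float:
    """Continue for the whole horizon regardless of what is observed."""
    return p * BENEFIT * HORIZON - (1 - p) * COST * HORIZON

for p in (0.1, 0.9):
    probe = ev_probe_then_drop(p)
    stubborn = ev_never_drop(p)
    print(f"p={p}: probe-then-drop={probe:+.2f}  never-drop={stubborn:+.2f}  "
          f"expected damage of never dropping={probe - stubborn:.2f}")
```

The stubborn policy is strictly worse whenever p < 1, but its a priori expected damage shrinks as p grows, because the failure branch that makes it costly becomes unlikely.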
continuing to do it despite it not working won’t be as damaging (a priori)
Not sure I understood this—are you saying that the expected damage caused by continuing to do it despite it not working is less just because the probability that it won’t work is less?