How about accepting that some things are neither, but you still have to make a choice? (E.g. inevitability of (u)FAI is untestable, and relies on a number of disputed assumptions and extrapolations. Same with the viability of cryonics.) How do you construct your priors to make a decision you can live with, and how do you deal with the situation where, despite your best priors, you end up being proven wrong?
Now, this is a much better question! And yes, I am thinking a lot about these. But in some sense this kind of thing bothers me much less: because it is so clear that the issue is unclear, my mind doesn't try to unconditionally commit it to the belief pool just because I read something exciting about it. And then I know I have to think about it, look for independent sources, etc. (For these two specific problems, I am in different states of confusion. Cryonics: quite confused; AGI: a bit better, at least I know what my next steps are.)
How do you deal with this?