No, it’s not too dark; it is useful to see an even stronger expression of caution. But it misses the point a bit. It’s not very helpful to know that Eliezer is probably wrong about some things, and neither is finding a mistake here or there. It just doesn’t help.
You see, my goal is to accept and learn fully that which is accurate, and reject (and maybe fix and improve) that which is wrong. Neither one is enough by itself.
How about accepting that some things are neither, but you still have to make a choice? (E.g. the inevitability of (u)FAI is untestable and relies on a number of disputed assumptions and extrapolations. The same goes for the viability of cryonics.) How do you construct your priors to make a decision you can live with, and how do you deal with the situation where, despite your best priors, you end up being proven wrong?
Now, this is a much better question! And yes, I am thinking a lot about these. But in some sense this kind of thing bothers me much less: because it is so clear that the issue is unclear, my mind doesn’t try to unconditionally commit it to the belief pool just because I read something exciting about it. Then I know I have to think about it, look for independent sources, etc. (For these two specific problems, I am in different states of confusion. Cryonics: quite confused; AGI: a bit better, since at least I know what my next steps are.)
How do you deal with this?