If you only assign significant probability mass to one changeover day, you behave inductively on almost all the days up to that point, and hence make relatively few epistemic errors.
But even one epistemic error is enough to cause an arbitrarily large loss in utility. Suppose you think that with 99% probability, unless you personally join a monastery and stop having any contact with the outside world, God will put everyone who ever existed into hell on 1/1/2050. So you do that instead of working on making a positive Singularity happen. Since you can’t update away this belief until it’s too late, it does seem important to have “reasonable” priors instead of just assigning a non-superexponentially-tiny probability to “induction works”.
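To make the structure of this example explicit, here is a toy expected-utility calculation (the utilities $U_{\text{hell}} \ll 0$ and $U_{\text{sing}} > 0$ are hypothetical placeholders, and joining the monastery is normalized to utility 0):

\[
\mathrm{EU}(\text{join monastery}) = 0.99 \cdot 0 + 0.01 \cdot 0 = 0,
\qquad
\mathrm{EU}(\text{work on Singularity}) = 0.99\,U_{\text{hell}} + 0.01\,U_{\text{sing}}.
\]

For any sufficiently negative $U_{\text{hell}}$ the agent joins the monastery; if the 99% belief is in fact false and cannot be updated in time, the realized loss relative to working is on the order of $U_{\text{sing}}$, which can be made arbitrarily large.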
But even one epistemic error is enough to cause an arbitrarily large loss in utility.
This is always true.
Since you can’t update away this belief until it’s too late, it does seem important to have “reasonable” priors instead of just assigning a non-superexponentially-tiny probability to “induction works”.
I’d say more that besides your one reasonable prior you also need to not make various sorts of specifically harmful mistakes, but this only becomes true when instrumental welfare as well as epistemic welfare are being taken into account. :)
Do you think it’s useful to consider “epistemic welfare” independently of “instrumental welfare”? To me it seems that approach has led to a number of problems in the past.
Solomonoff Induction was historically justified in a way similar to your post: you should use the universal prior, because whatever the “right” prior is, if it’s computable then substituting the universal prior will cost you only a limited number of epistemic errors. I think this sort of argument is more impressive/persuasive than it should be (at least for some people, including myself when I first came across it), and makes them erroneously think the problem of finding “the right prior” or “a reasonable prior” is already solved or doesn’t need to be solved.
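For reference, the dominance argument alluded to here can be stated as the standard bound on the universal prior (notation: $M$ is the universal semimeasure, $\mu$ the true computable environment, $K(\mu)$ the length of the shortest program computing $\mu$):

\[
M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \quad \text{for every finite string } x,
\]

which implies that the total expected prediction error is finite:

\[
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ D_{\mathrm{KL}}\!\left( \mu(\cdot \mid x_{<t}) \,\Vert\, M(\cdot \mid x_{<t}) \right) \right] \;\le\; K(\mu)\ln 2.
\]

The bound is finite but can be enormous, and it holds for any computable $\mu$, so by itself it says nothing about which prior is the “right” one.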
Another example: thinking that anthropic reasoning / indexical uncertainty is clearly an epistemic problem and hence ought to be solved within epistemology (rather than decision theory), which has led to dozens of papers arguing over the right way to do Bayesian updating in the Sleeping Beauty problem.
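For concreteness, the dispute in those papers is roughly over two answers to the same updating question (this summary is standard and does not take a side): on being awakened, the “halfer” reasons that an awakening was guaranteed under either coin outcome and so carries no information,

\[
P(\text{Heads} \mid \text{awake}) = P(\text{Heads}) = \tfrac{1}{2},
\]

while the “thirder” treats the three possible awakenings (Monday-Heads, Monday-Tails, Tuesday-Tails) as equally likely and gets

\[
P(\text{Heads} \mid \text{awake}) = \tfrac{1}{3}.
\]

Which of these is the “right” Bayesian update is exactly the question the comment suggests should instead be handled within decision theory.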