In discussions here (ETA: meaning, in the Less Wrong community), I mostly take it for granted that people have adopted the Bayesian perspective promoted in Eliezer’s sequences. I think that one can make a pretty good argument (although mathematical rigour is too much to ask for) that receiving information through one’s senses can never be enough to justify absolute certainty about anything external. But I’d rather not try to make it here (ETA: meaning, in this discussion thread).
Is there a rigorous argument for this, or is this just a very powerful way of modeling the world?
It’s more that Bayesian analysis is a technique you can apply to anything, and it is useful under certain conditions.
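A minimal sketch of why finite evidence never forces certainty under Bayes’ rule: as long as the prior is below 1 and the evidence has nonzero probability under the alternative hypothesis, the posterior stays strictly below 1 no matter how many confirming observations arrive. (The function name and numbers here are illustrative, not from the thread; exact rational arithmetic is used so floating-point rounding doesn’t blur the point.)

```python
from fractions import Fraction

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Exact posterior P(H|E) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start maximally uncertain, then observe the same strongly
# confirming evidence (9x likelier under H) a hundred times.
p = Fraction(1, 2)
for _ in range(100):
    p = bayes_update(p, Fraction(9, 10), Fraction(1, 10))

print(float(p))  # rounds to 1.0 in floating point...
print(p < 1)     # ...but exactly: True — still short of certainty
```

The posterior here is 9^100 / (9^100 + 1): astronomically close to 1, yet provably not equal to it, which is the Bayesian gloss on "the senses can never justify absolute certainty."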