People are not “ideal agents”. If you construct your formalization specifically to fit your ideas of what an ideal agent should and should not be able to do, it will be a poor fit for actual, live human beings.
So either you make a system for ideal agents—in which case you’ll still run into some problems because, as has been pointed out upthread, standard probability math stops working if you disallow zeros and ones—or you make a system which is applicable to our imperfect world with imperfect humans.
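To make the breakage concrete: Kolmogorov's normalization axiom assigns the certain event probability exactly 1, and complementation then forces 0, so a system that bans those values contradicts the axioms directly:

\[
P(\Omega) = 1, \qquad P(\lnot A) = 1 - P(A)
\]

Any tautology such as \(A \lor \lnot A\) must receive probability exactly 1, which a "no ones allowed" system cannot grant.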
I don’t see why both aren’t useful. If you want a descriptive model instead of a normative one, try prospect theory.
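(For a flavor of the descriptive route: below is a minimal sketch of prospect theory's probability-weighting function, in the one-parameter form from Tversky and Kahneman (1992); the gamma = 0.61 default is their fitted estimate for gains, and the function name is mine.)

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Prospect-theory probability weighting (Tversky & Kahneman, 1992).

    Humans tend to overweight small probabilities and underweight
    moderate-to-large ones; this curve describes that behavior rather
    than prescribing it.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p = {p:>4}  ->  w(p) = {weight(p):.3f}")
```

Note how w(0.01) comes out around 0.06: small probabilities get inflated several-fold, exactly the kind of non-ideal behavior a normative system would rule out.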
I just don’t read this article as claiming that probabilities of 0 and 1 are disallowed in probability theory. I see it as a warning not to put 0s and 1s in your AI’s prior. You’re not changing the math so much as picking good priors.
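As a quick illustration of that warning (a minimal sketch; the hypothesis and likelihood numbers are made up for the example): a prior of exactly 0 can never be updated away, no matter how strong the evidence, while a merely tiny prior can recover.

```python
def update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """One step of Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Each observation is ten times likelier if H is true than if it is false.
for prior in (0.0, 1e-6):
    p = prior
    for _ in range(10):
        p = update(p, likelihood_h=0.9, likelihood_not_h=0.09)
    print(f"prior {prior}: posterior after 10 observations = {p:.6f}")
```

The zero prior stays at exactly 0 forever; the 10^-6 prior climbs to roughly 0.9999 after ten observations. That asymmetry (sometimes called Cromwell's rule) is the practical content of the warning.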