The claim that “people don’t actually have absolute certainty” looks iffy to me, anyway. The two immediate questions that come to mind are (1) How do you know? and (2) Not even a single human being?
The way I view that statement is: “In our formalization, agents with absolutely certain beliefs cannot change those beliefs; we want our formalization to capture our intuitive sense of how an ideal agent would update its beliefs; a formalization with a quality of fanaticism does not capture that intuitive sense; therefore we do not want a quality of fanaticism.”
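Just to make the first step of that argument concrete, here is a minimal sketch of a Bayesian update (the function and numbers are my own illustration, not anything from the article): a prior of exactly 0 or 1 is a fixed point of the update, so no amount of evidence can move it, and that is the “fanaticism” being objected to.

```python
# Sketch only: Bayes' rule for a single hypothesis H given evidence E.
def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

print(update(0.9, 0.01, 0.99))  # strong counter-evidence pulls 0.9 down to about 0.08
print(update(1.0, 0.01, 0.99))  # a prior of exactly 1 stays at 1, whatever the evidence
```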
And what state of the world would correspond to the statement “Some people have absolute certainty”? Do you think that we can take some highly advanced and entirely fictional neuroimaging technology, look at a brain, and meaningfully say, “There’s a belief with probability 1”?
And on the other hand, I’m not afraid to talk about folk certainty, where the properties of an ideal mathematical system are less relevant, where everyone can remain blissfully logically uncertain about the fact that beliefs with probability 1 and 0 imply undesirable consequences in formal systems that possess them, and say things like “I believe that absolutely.” I am not afraid to say something like, “That person will not stop believing that for as long as he lives,” and mean that I predict with high confidence that that person will not stop believing that for as long as he lives.
And once you accept that the formalization is trying to capture our intuitive sense of an ideal agent, decide whether or not the quality of fanaticism captures that sense, and decide whether or not you’re going to be a stickler about folk language, then I don’t think that any question or confusion around that claim remains.
People are not “ideal agents”. If you specifically construct your formalization to fit your ideas of what an ideal agent should and should not be able to do, this formalization will be a poor fit to actual, live human beings.
So either you make a system for ideal agents—in which case you’ll still run into some problems because, as has been pointed out upthread, standard probability math stops working if you disallow zeros and ones—or you make a system which is applicable to our imperfect world with imperfect humans.
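For anyone who wants the “stops working” point spelled out: as I understand the upthread argument, the standard axioms themselves hand you extreme values, so you cannot simply ban them without changing the formalism. Roughly,

$$P(\Omega) = 1, \qquad P(\varnothing) = 0, \qquad P(A \mid A) = \frac{P(A \cap A)}{P(A)} = 1 \quad \text{whenever } P(A) > 0,$$

so a system that forbids 0 and 1 outright is no longer ordinary probability theory but some modification of it.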
I don’t see why both aren’t useful. If you want a descriptive model instead of a normative one, try prospect theory.
I just don’t see this article as laying down an axiom that says probabilities of 0 and 1 aren’t allowed in probability theory. I see it as a warning not to put 0s and 1s in your AI’s prior. You’re not changing the math so much as picking good priors.
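If you want that “picking good priors” reading made concrete, here is a hypothetical sketch (the function name and epsilon are mine, not from the article): instead of changing probability theory, you construct the prior so that no hypothesis starts at exactly 0 or 1, for instance by mixing in a little of the uniform distribution.

```python
# Hypothetical illustration: keep a prior over hypotheses away from exact 0 and 1
# by mixing it with a small amount of the uniform distribution.
def soften_prior(prior, epsilon=1e-6):
    """Mix `prior` with the uniform distribution so no entry is exactly 0 or 1."""
    n = len(prior)
    return [(1 - epsilon) * p + epsilon / n for p in prior]

print(soften_prior([1.0, 0.0, 0.0]))  # ~[0.9999993, 3.3e-07, 3.3e-07]
```

The math of probability theory is untouched; the only change is that the prior itself assigns no extreme values, so later updates can always move it.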