Updating by Bayesian conditionalization does assume that you are treating E as if its probability is now 1. If you want an update rule that is consistent with maintaining uncertainty about E, one proposal is Jeffrey conditionalization. If P1 is your initial (pre-evidential) distribution, and P2 is the updated distribution, then Jeffrey conditionalization says:
P2(H) = P1(H | E) P2(E) + P1(H | ~E) P2(~E).
Obviously, this reduces to Bayesian conditionalization when P2(E) = 1.
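For concreteness, here's a minimal sketch of the rule in code; the likelihoods 0.8 and 0.3 are made-up numbers for illustration, not anything from the thread:

```python
# Minimal sketch of Jeffrey conditionalization. The likelihood values
# below are hypothetical, chosen only to illustrate the arithmetic.

def jeffrey_update(p1_h_given_e, p1_h_given_not_e, p2_e):
    """P2(H) = P1(H|E) * P2(E) + P1(H|~E) * P2(~E)."""
    return p1_h_given_e * p2_e + p1_h_given_not_e * (1.0 - p2_e)

# With P2(E) = 1, this is just ordinary Bayesian conditionalization:
print(jeffrey_update(0.8, 0.3, 1.0))  # 0.8, i.e. P1(H|E)

# With residual uncertainty about E, the posterior is a weighted mix:
print(jeffrey_update(0.8, 0.3, 0.6))  # 0.8*0.6 + 0.3*0.4 = 0.6
```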
Yeah, the problem I have with that, though, is that I’m left asking: why did I change my probability of E? Is it because I updated on something else? Was I certain of that something else? If not, then why did I change my probability of that something else? And on we go down the rabbit hole of an infinite regress.
The infinite regress is anticipated in one of your priors.
You’re playing a game. Variant A of an enemy attacks high most of the time; variant B attacks low some of the time; the rest of the time they both do forward attacks. We have priors, which we can arbitrarily set at any value. The enemy does a forward attack; here, we assign 100% probability to our observation of the forward attack. But let’s say we see it out of the corner of our eye; in that case, we might assign 60% probability to the forward attack, but we still have 100% probability on the observation itself. Add an unreliable witness recounting the attack they saw out of the corner of their eye; we might assign 50% probability that they’re telling the truth, but 100% probability that we heard them. Add in a hearing problem; now we might assign 90% probability that we heard them correctly, but 100% probability that we heard them at all.
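One way to make the stacking concrete (a sketch, assuming each layer of unreliability is independent and simply discounts the one below it; the 0.5/0.7 likelihoods at the end are hypothetical numbers, not from the example):

```python
# Sketch of how the layered uncertainties above might stack, assuming
# each layer is independent and multiplicatively discounts the next.
# The reliabilities are the ones from the example; everything else is
# a made-up number for illustration.

layers = [
    ("forward attack, seen from the corner of an eye", 0.60),
    ("witness is telling the truth",                   0.50),
    ("we heard the witness correctly",                 0.90),
]

p_chain = 1.0
for label, reliability in layers:
    p_chain *= reliability
    print(f"after {label!r}: {p_chain:.3f}")
# chained reliability: 0.60 * 0.50 * 0.90 = 0.27

# Jeffrey-updating on "a forward attack happened" with P2(E) = p_chain,
# using hypothetical likelihoods P1(H|E) = 0.5 and P1(H|~E) = 0.7:
p_h = 0.5 * p_chain + 0.7 * (1.0 - p_chain)
print(f"posterior for H: {p_h:.3f}")  # 0.5*0.27 + 0.7*0.73 = 0.646
```

Each added layer softens the evidence further, but note that at every level something is still held at 100%: the bare fact of the observation itself.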
We can keep adding levels of uncertainty, true. Eventually we will arrive at the demon-that-is-deliberately-deceiving-us thing Descartes talks about, at which point we can’t be certain of anything except our own existence.
Infinite regress results in absolutely no certainty. But infinite regress isn’t useful; lack of certainty isn’t useful. We can’t prove the existence of the universe, but we can see, quite obviously, the usefulness of assuming the universe does exist. Which is to say, probability doesn’t exist in a vacuum; it serves a purpose.
Or, to approach it another way: Gödel. We can’t be absolutely certain of our probabilities, because at least one of our probabilities must be axiomatic.
Presumably because you got some new information. If there is no information, there is no update. If the information is uncertain, make appropriate adjustments. The “infinite regress” would either converge to some limit or you’ll end up, as OrphanWilde says, with Descartes’ deceiving demon at which point you don’t know anything and just stand there slack-jawed till someone runs you over.
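A toy illustration of the “converge to some limit” branch (my simplifying assumption, not anything established above: each deeper meta-level of doubt can only shift the working probability by a geometrically shrinking amount):

```python
# Toy model of the regress converging. Assumption for illustration:
# the k-th meta-level of doubt shifts the probability by delta * r**k,
# so the total shift is a geometric series with limit delta / (1 - r).

def regress(p0, delta, r, levels):
    p = p0
    for k in range(levels):
        p -= delta * (r ** k)  # each deeper doubt matters less
    return p

for n in (1, 5, 20, 100):
    print(n, round(regress(0.9, 0.05, 0.5, n), 6))
# Converges to 0.9 - 0.05 / (1 - 0.5) = 0.8 rather than regressing
# down to total uncertainty.
```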