Thanks! That answers a lot of my questions even without a concrete example.
I found this part of your reply particularly interesting:
if you don’t have (2), updates are not very constrained by Dutch-book type rationality. So in general, Jeffrey argued that there are many valid updates beyond Bayes and Jeffrey updates.
The abstract example I came up with after reading that was something like ‘I think A at 60%. If I observe X, then I’d update to A at 70%. If I observe Y, then I’d update to A at 40%. If I observe Z, I don’t know what I’d think.’
I think what’s a little confusing is that I imagined these kinds of adjustments were already incorporated into ‘Bayesian reasoning’. Like, for the canonical ‘cancer test result’ example, we could easily adjust our understanding of ‘receives a positive test result’ to include uncertainty about the evidence itself, e.g. maybe the test was performed incorrectly or the result was misreported by the lab.
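For concreteness, this is the kind of toy calculation I had in mind (all numbers made up): treat the thing we actually condition on as a fallible report of the test result, and fold the report’s reliability into an ordinary Bayes update.

```python
# Toy version of the cancer-test example, with the evidence itself uncertain
# (all numbers are made up for illustration).
p_cancer = 0.01                  # prior P(cancer)
p_pos_given_cancer = 0.90        # test sensitivity
p_pos_given_healthy = 0.05       # test false-positive rate
p_report_correct = 0.98          # chance the lab reports the result it actually got

# Probability the test is really positive.
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Probability we *hear* "positive" (the report can be wrong in either direction).
p_report_pos = p_report_correct * p_pos + (1 - p_report_correct) * (1 - p_pos)

# P(cancer & we hear "positive"), marginalizing over the true test result.
p_cancer_and_report_pos = p_cancer * (
    p_pos_given_cancer * p_report_correct
    + (1 - p_pos_given_cancer) * (1 - p_report_correct)
)

# Ordinary Bayes update on the proposition we actually observed: the report.
p_cancer_given_report = p_cancer_and_report_pos / p_report_pos

p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(p_cancer_given_pos)      # ~0.154: conditioning on the test result itself
print(p_cancer_given_report)   # ~0.116: conditioning on the fallible report
```

With these numbers the posterior comes out a bit lower than if we conditioned on the test result directly, which is the sort of adjustment I assumed was already part of ‘Bayesian reasoning’.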
Do the ‘same’ priors cover our ‘base’ credence in different types of evidence? How are probabilities reasonably, or practically, assigned or calculated for different types of evidence? (Do we need to further adjust our confidence in those assignments or calculations?)
Maybe I do still need a concrete example to reach a decent understanding.
Richard Bradley gives an example of a non-Bayes non-Jeffrey update in Radical Probabilism and Bayesian Conditioning. He calls his third type of update Adams conditioning. But he goes even further, giving an example which is not Bayes, Jeffrey, or Adams (the example with the pipes toward the end; figure 1 and accompanying text). To be honest I still find the example a bit baffling, because I’m not clear on why we’re allowed to predictably violate the rigidity constraint in the case he considers.
I think what’s a little confusing is that I imagined these kinds of adjustments were already incorporated into ‘Bayesian reasoning’. Like, for the canonical ‘cancer test result’ example, we could easily adjust our understanding of ‘receives a positive test result’ to include uncertainty about the evidence itself, e.g. maybe the test was performed incorrectly or the result was misreported by the lab.
We can always invent a classically Bayesian scenario where we’re uncertain about some particular X, by making it so we can’t directly observe X, but rather get some other observations. E.g., if we can’t directly observe the test results but are told about them through a fallible line of communication. What’s radical about Jeffrey’s view is to allow the observations themselves to be uncertain. So if you look at e.g. a color but aren’t sure what you’re looking at, you don’t have to contrive a color-like proposition which you do observe in order to record your imperfect observation of color.
You can think of radical probabilism as “Bayesianism at a distance”: like if you were watching a Bayesian agent, but couldn’t be bothered to record every single little sense-datum. You want to record that the test results are probably positive, without recording your actual observations that make you think that. We can always posit underlying observations which make the radical-probabilist agent classically Bayesian. Think of Jeffrey as pointing out that it’s often easier to work “at a distance” instead, and then once you start thinking this way, you can see it’s closer to your conscious experience anyway. So why posit underlying propositions which make all your updates into Bayes updates?
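To make the “at a distance” picture concrete, here’s a rough sketch (toy numbers, nothing from the original post): the same belief change computed two ways, once by positing an underlying fallible report and doing an ordinary Bayes update, and once by taking the resulting credence in “test positive” as given and doing a Jeffrey update over the partition {positive, negative} with rigid conditionals.

```python
# Rough sketch with toy numbers: the same belief change computed two ways.
# (1) Posit an underlying observation -- a fallible report of the test result --
#     and do an ordinary Bayes update on it.
# (2) Work "at a distance": take the resulting credence in "test positive" as given
#     and do a Jeffrey update on the partition {positive, negative}, keeping the
#     conditional credences P(cancer | positive) and P(cancer | negative) rigid.

p_cancer = 0.01
p_pos_given_cancer = 0.90
p_pos_given_healthy = 0.05
p_report_correct = 0.98          # reliability of the line of communication

p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
p_cancer_given_neg = (1 - p_pos_given_cancer) * p_cancer / (1 - p_pos)

# --- (1) Bayes update on the underlying report ---
p_report_pos = p_report_correct * p_pos + (1 - p_report_correct) * (1 - p_pos)
p_pos_given_report = p_report_correct * p_pos / p_report_pos   # new credence in "positive"
p_cancer_bayes = (p_cancer_given_pos * p_report_correct * p_pos
                  + p_cancer_given_neg * (1 - p_report_correct) * (1 - p_pos)
                  ) / p_report_pos

# --- (2) Jeffrey update, recording only "the test is probably positive" ---
# P_new(cancer) = P_new(pos) * P_old(cancer | pos) + P_new(neg) * P_old(cancer | neg)
p_new_pos = p_pos_given_report
p_cancer_jeffrey = p_new_pos * p_cancer_given_pos + (1 - p_new_pos) * p_cancer_given_neg

print(p_cancer_bayes, p_cancer_jeffrey)   # the two agree (up to floating-point rounding)
```

The two routes agree, which is the sense in which the radical-probabilist bookkeeping is just Bayesianism at a distance: you can always posit the underlying report, but you don’t have to.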
As for me, I have no problem with supposing the existence of such underlying propositions (I’ll be making a post elaborating on that at some point...) but find radical probabilism to nonetheless be a very philosophically significant point.
Thanks again!
Your point about “Bayesianism at a distance” makes a lot of sense.