What is actually left of Bayesianism after Radical Probabilism? Your original post on it was partially explaining logical induction, and introduced assumptions from that in much the same way as you describe here. But without that, there doesn't seem to be a whole lot left. The idea is that all that matters is resistance to Dutch books, and for a Dutch book to be fair the bookie must not have an epistemic advantage over the agent. Said that way, it depends on some notion of "what the agent could have known at the time", and giving a coherent account of this would require solving epistemology in general. So we avoid this problem by instead taking "what the agent actually knew (believed) at the time", which is a subset and so also fair. But this doesn't do any work; it just offloads it to agent design.
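As a concrete reminder of what a Dutch book is (my own minimal sketch, not part of the original exchange): if an agent's credences violate the probability axioms, a bookie can sell it a set of bets, each fair by the agent's own lights, that together lose money in every outcome.

```python
def dutch_book_profit(p_a: float, p_not_a: float) -> float:
    """Bookie's guaranteed profit when an agent prices a $1 bet on A at
    p_a and a $1 bet on not-A at p_not_a, with p_a + p_not_a > 1.

    The bookie sells the agent both bets. Exactly one of them pays out,
    so the bookie collects p_a + p_not_a up front and pays out $1,
    whichever way A turns out.
    """
    return p_a + p_not_a - 1.0

# Incoherent credences P(A) = 0.6, P(not-A) = 0.7 lose in every world:
assert abs(dutch_book_profit(0.6, 0.7) - 0.3) < 1e-9
# Coherent credences leave the bookie nothing:
assert dutch_book_profit(0.5, 0.5) == 0.0
```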
Part of the problem is that I avoided getting too technical in Radical Probabilism, so I bounced back and forth between different possible versions of Radical Probabilism without too much signposting.
I can distinguish at least five versions:
1. Jeffrey's version. I don't have a good source for his full picture. I get the sense that the answer to "what is left?" is "very little!" (e.g., he didn't think agents have to be able to articulate probabilities). But I am not sure of the details.
2. The simplification of Jeffrey's version, where I keep the Kolmogorov axioms (or the Jeffrey-Bolker axioms) but reject Bayesian updates.
3. Skyrms' deliberation dynamics. This is a pretty cool framework and I recommend checking it out (perhaps via his book The Dynamics of Rational Deliberation). The basic idea of its non-Bayesian updates is that it's fine so long as you're "improving" (moving towards something good).
4. The version represented by logical induction.
5. The Shafer & Vovk version. I'm not really familiar with this version, but I hear it's pretty good.
(I can think of more, but I cut myself off.)
> Said that way, it depends on some notion of “what the agent could have known at the time”, and giving a coherent account of this would require solving epistemology in general.
Making a broad generalization, I’m going to stick things into camp #2 above or camp #4. Theories in camp #2 have the feature that they simply assume a solid notion of “what the agent could have known at the time”. This allows for a nice simple picture in which we can check Dutch Book arguments. However, it lends itself more easily to logical omniscience, since it doesn’t allow a nuanced picture of how much logical information the agent can generate. Theories in camp #4 do give such a nuanced picture, for example via the poly-time assumption.
Either way, we’ve made assumptions which tell us which Dutch Books are valid. We can then check what follows.
For example with logical induction, we know that it can’t be Dutch-booked by any polynomial-time trader. Why do we think that criterion is important? Because we think it’s realistic for an agent, in the limit, to know anything you can figure out in polynomial time. And we think that because we have an algorithm that does it. OK, but what intellectual progress does the Dutch book argument make here? We had to first find out what one can realistically know, and got logical induction, from which we could make the poly-time criterion. So now we know it’s fair to judge agents by that criterion, so we should find one that meets it, which fortunately we already have. But we could also just not have thought about Dutch books at all, and just tried to figure out what one could realistically know, and what would we have lost? Making the Dutch book here seems like a spandrel in thinking style.
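To make “Dutch-booked by a polynomial-time trader” slightly more concrete, here is a toy sketch (my own illustration, not the formalism of the logical induction paper): a belief sequence that oscillates predictably is exploitable, since even a very cheap trader that buys low and sells high extracts profit without bound.

```python
def trade_profits(prices):
    """Cumulative profit of a toy trader that buys one $1-payoff share
    whenever the price dips below 0.4 and sells it back whenever the
    price rises above 0.6 (profit counted per buy/sell round trip)."""
    profit, holding, cost = 0.0, False, 0.0
    for p in prices:
        if not holding and p < 0.4:
            holding, cost = True, p      # buy low
        elif holding and p > 0.6:
            profit += p - cost           # sell high
            holding = False
    return profit

# A predictably oscillating belief sequence is exploited without bound;
# the trader's strategy is constant-time per step, well within poly-time.
oscillating = [0.25, 0.75] * 100
assert trade_profits(oscillating) == 50.0
```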
I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm. The criterion came first, and the construction was finished soon after; so the hard part was the criterion (which is conceived in Dutch-book terms), and the construction then follows nicely from the idea of avoiding these Dutch books.
Plus, logical induction without the criterion would be much less interesting. The criterion implies all sorts of nice properties. Without the criterion, we could point to all the nice properties the logical induction algorithm has, but it would just be a disorganized mess of properties. Someone would be right to ask if there’s an underlying reason for all these nice properties: an organizing principle, rather than just a list of seemingly nice properties. The answer to that question would be “Dutch books”.
BTW, I believe philosophers currently look down on Dutch books for being too pragmatic/adversarial a justification, and favor newer approaches which justify epistemics from a plain desire to be correct rather than a desire not to be exploitable. So by no means should we assume that Dutch books are the only way. However, I personally feel that logical induction is strong evidence that Dutch books are an important organizing principle.
As a side note, I reread Radical Probabilism for this, and everything in the “Other Rationality Properties” section seems pretty shaky to me. The proofs of both convergence and calibration, as written, depend on logical induction, or else on the assumption that the agent would know if it’s not convergent/calibrated; in which case, could orthodoxy not achieve the same? You acknowledge this for convergence in a comment, but also hint at another proof. But if radical probabilism is a generalization of orthodox Bayesianism, then how can it have guarantees that the latter doesn’t?
You’re right to call out the contradiction between calling radical probabilism a generalization, vs claiming that it implies new restrictions. I should have been more consistent about that. Radical Probabilism is merely “mostly a generalization”.
I still haven’t learned about how #2-style settings deal with calibration and convergence, so I can’t really comment on the other proofs I implied the existence of. But, yeah, it means there are extra rationality conditions beyond just the Kolmogorov axioms.
For the conservation of expected evidence, note that the proof here involves a bet on what the agent’s future beliefs will be. This is a fragile construction: you need to make sure the agent can’t troll the bookie, without assuming the accessibility of the structures you want to establish. It also assumes the agent has models of itself in its hypothesis space. And even in the weaker forms, the result seems unrealistic. There is the problem with psychedelics that the “virtuous epistemic process” is supposed to address, but this is something the formalism allows for with a free parameter, not something it solves. The radical probabilist trusts the sequence of P_i, but doesn’t say anything about where they come from. You can now assert that it can’t be identified with particular physical processes, but that just leaves a big question mark for bridging laws. If you want to check whether there are Dutch books against your virtuous epistemic process, you have to be able to identify its future members. Now I can’t exclude that some process could avoid all Dutch books against it without knowing where they are (and without being some trivial stupidity), but it seems like a pretty heavy demand.
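The property being bet on, conservation of expected evidence, can at least be checked numerically for an ordinary Bayesian update (a sketch with made-up numbers, separate from the bet-on-future-beliefs construction under discussion): the current credence equals the expectation, under the current credence, of the future credence.

```python
# Prior and likelihoods (arbitrary illustrative values):
prior_h = 0.3            # P(H)
p_e_given_h = 0.8        # P(E | H)
p_e_given_not_h = 0.4    # P(E | not-H)

# Probability of seeing the evidence, and the posterior either way:
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
post_if_e = p_e_given_h * prior_h / p_e
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)

# Expected future credence, weighted by how likely each outcome is,
# recovers the prior exactly:
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - prior_h) < 1e-12
```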
This part seems entirely addressed by logical induction, to me.
A “virtuous epistemic process” is a logical inductor. We know logical inductors come to trust their future opinions (without knowing specifically what they will be).
The logical induction algorithm tells us where the future beliefs come from.
The logical induction algorithm shows how to have models of yourself.
The logical induction algorithm shows how to avoid all Dutch books “without knowing where they are” (actually I don’t know what you meant by this).
> Either way, we’ve made assumptions which tell us which Dutch Books are valid. We can then check what follows.
Ok. I suppose my point could then be made as “#2-type approaches aren’t very useful, because they assume something that’s no easier than what they provide”.
> I think this understates the importance of the Dutch-book idea to the actual construction of the logical induction algorithm.
Well, you certainly know more about that than me. Where did the criterion come from in your view?
> This part seems entirely addressed by logical induction, to me.
Quite possibly. I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically. That said, I’m not sure logical inductors properly have beliefs about their own (in the de dicto sense) future beliefs. It doesn’t know “its” source code (though it knows that such code is a possible program) or even that it is being run with the full intuitive meaning of that, so it has no way of doing that. Rather, it would at some point think about the source code that we know is its, and come to believe that that program gives reliable results—but only in the same way in which it comes to trust other logical inductors. It seems like a version of this in the logical setting.
By “knowing where they are”, I mean strategies that avoid getting Dutch-booked without doing anything that looks like “looking for Dutch books against me”. One example of that would be The Process That Believes Everything Is Independent And Therefore Never Updates, but that’s a trivial stupidity.
> I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically.
From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there’s something interesting in the direction. But, as I’ve stated before, I have trouble comprehending how Jeffrey correctly predicted that there’s something interesting here, without logical uncertainty as a motivation. In hindsight, I feel his arguments make a great deal of sense; but without the reward of logical induction waiting at the end of the path, to me this seems like a weird path to decide to go down.
That said, we can try and figure out Jeffrey’s perspective, or, possible perspectives Jeffrey could have had. One point is that he probably thought virtual evidence was extremely useful, and needed to get people to open up to the idea of non-bayesian updates for that reason. I think it’s very possible that he understood his Radical Probabilism purely as a generalization of regular Bayesianism; he may not have recognized the arguments for convergence and other properties. Or, seeing those arguments, he may have replied “those arguments have a similar force for a dogmatic probabilist, too; they’re just harder to satisfy in that case.”
> That said, I’m not sure logical inductors properly have beliefs about their own (in the de dicto sense) future beliefs. It doesn’t know “its” source code (though it knows that such code is a possible program) or even that it is being run with the full intuitive meaning of that, so it has no way of doing that.
I totally agree that there’s a philosophical problem here. I’ve put some thought into it. However, I don’t see that it’s a real obstacle to … provisionally … moving forward. Generally I think of the logical inductor as the well-defined mathematical entity, and the self-referential beliefs as the logical statements which refer back to that mathematical entity (with all the pros and cons which come from logic; i.e., yes, I’m aware that even if we think of the logical inductor as the mathematical entity, rather than the physical implementation, there are formal-semantics questions of whether it’s “really referring to itself”; but it seems quite fine to provisionally set those questions aside).
So, while I agree, I really don’t think it’s cruxy.
> From my perspective, Radical Probabilism is a gateway drug.
This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.
> So, while I agree, I really don’t think it’s cruxy.
It wasn’t meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that are triggered during conversation, and could perhaps do better at indicating that. Whether it’s fine to set it aside provisionally depends on where you want to go from here.
> This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.
Agreed. Simple Bayes is the hero of the story in this post, but that’s more because the simple Bayesian can recognize that there’s something beyond.