I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically.
From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there’s something interesting in that direction. But, as I’ve stated before, I have trouble comprehending how Jeffrey correctly predicted that there’s something interesting here, without logical uncertainty as a motivation. In hindsight, I feel his arguments make a great deal of sense; but without the reward of logical induction waiting at the end of the path, this seems to me like a weird path to decide to go down.
That said, we can try to figure out Jeffrey’s perspective, or possible perspectives Jeffrey could have had. One point is that he probably thought virtual evidence was extremely useful, and needed to get people to open up to the idea of non-Bayesian updates for that reason. I think it’s very possible that he understood his Radical Probabilism purely as a generalization of regular Bayesianism; he may not have recognized the arguments for convergence and other properties. Or, seeing those arguments, he may have replied, “those arguments have a similar force for a dogmatic probabilist, too; they’re just harder to satisfy in that case.”
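To make the virtual-evidence point concrete, here is a minimal sketch in Python (my own illustration; the rain/no-rain partition, the prior, and the 4 : 1 likelihood ratio are all made-up numbers, not anything from Jeffrey). It contrasts a strict Bayesian update, which conditions on a proposition the agent can actually state, with a virtual-evidence update, which is justified only by the likelihood ratios an experience induces over the hypotheses:

```python
# Toy sketch: strict Bayesian conditioning vs. a virtual-evidence update.
# All numbers are invented for illustration.

prior = {"rain": 0.3, "no_rain": 0.7}

def renormalize(weights):
    """Scale a dict of nonnegative weights so the values sum to 1."""
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

# Strict Bayesian update: condition on an explicit proposition E ("dark clouds")
# with known likelihoods P(E | H) for each hypothesis H.
likelihood_E = {"rain": 0.9, "no_rain": 0.2}
posterior_strict = renormalize({h: prior[h] * likelihood_E[h] for h in prior})

# Virtual-evidence update: no proposition in the agent's language was observed;
# the experience is summarized only by likelihood ratios over the hypotheses
# (here, "that glance at the sky felt 4 : 1 in favor of rain").
virtual_ratios = {"rain": 4.0, "no_rain": 1.0}
posterior_virtual = renormalize({h: prior[h] * virtual_ratios[h] for h in prior})

print(posterior_strict)   # update justified by an evidence proposition the agent can state
print(posterior_virtual)  # update justified only by the shift the experience induces
```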
That said, I’m not sure logical inductors properly have beliefs about their own future beliefs (in the de dicto sense). A logical inductor doesn’t know “its” source code (though it knows that such code is a possible program), nor does it know, with the full intuitive meaning of that, that it is being run; so it has no way of forming such beliefs.
I totally agree that there’s a philosophical problem here, and I’ve put some thought into it. However, I don’t see that it’s a real obstacle to … provisionally … moving forward. Generally I think of the logical inductor as the well-defined mathematical entity, and of the self-referential beliefs as the logical statements which refer back to that mathematical entity (with all the pros and cons that come from logic; i.e., yes, I’m aware that even if we think of the logical inductor as the mathematical entity, rather than the physical implementation, there are formal-semantics questions about whether it’s “really referring to itself”; but it seems quite fine to provisionally set those questions aside).
So, while I agree, I really don’t think it’s cruxy.
From my perspective, Radical Probabilism is a gateway drug.
This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.
So, while I agree, I really don’t think it’s cruxy.
It wasn’t meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that get triggered during conversation, and I could perhaps do better at indicating that. Whether it’s fine to set it aside provisionally depends on where you want to go from here.
This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.
Agreed. Simple Bayes is the hero of the story in this post, but that’s more because the simple Bayesian can recognize that there’s something beyond.