It feels to me like you argue from time to time against strawmen:
For axioms, Cox’s theorem merely requires you to accept Boolean algebra and calculus as true, and then you can derive probability theory as extended logic from that.
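To make the “extended logic” claim concrete, here is a toy sketch (my own illustration, not a derivation of Cox’s theorem): the product and sum rules of probability reduce to Boolean AND/OR when plausibilities are restricted to 0 and 1, and Bayes’ theorem is then just the update rule that falls out.

```python
# Toy sketch of "probability as extended logic" (illustration only,
# not a proof of Cox's theorem).

def p_and(p_a, p_b_given_a):
    """Product rule: P(A and B) = P(A) * P(B | A)."""
    return p_a * p_b_given_a

def p_or(p_a, p_b, p_a_and_b):
    """Sum rule: P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_a_and_b

def bayes(prior, lik_h, lik_not_h):
    """P(H | E) for a binary hypothesis space {H, not-H}."""
    evidence = lik_h * prior + lik_not_h * (1 - prior)
    return lik_h * prior / evidence

# At the Boolean extremes (certainties 0 and 1) the rules reduce to
# ordinary AND / OR -- the sense in which probability "extends" logic:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert p_and(a, b) == float(bool(a) and bool(b))
        assert p_or(a, b, a * b) == float(bool(a) or bool(b))

# In between the extremes we get graded plausibility, e.g. an update
# with made-up numbers:
posterior = bayes(prior=0.5, lik_h=0.8, lik_not_h=0.2)
```

The point of the sketch is only the reduction at the extremes; nothing here bears on whether the quantifier case (below in the thread) works out.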
While probability extends basic logic, it doesn’t extend advanced logic (predicate calculus), as David Chapman argues in Probability theory does not extend logic.
I’m skeptical about this, because if you really had a method for, say, hypothesis generation, this would actually imply logical omniscience, and would basically allow us to create full AGI, RIGHT NOW.
This seems to confuse the idea of having a useful method for hypothesis generation with having a perfect method for hypothesis generation.
As far as I know, being able to do this would imply that P = NP is true, and most computer scientists do not think that’s likely to be true.
Saying that you have one unified theory that can give you the correct hypothesis in every case without looking at all alternatives might violate P ≠ NP. On the other hand, P ≠ NP doesn’t mean that there aren’t subproblems for which there’s an algorithm for finding a perfect or at least good hypothesis.
If P ≠ NP, that supports the tool box paradigm. Different tools will perform well for generating hypotheses in different domains, and there’s no perfect unified theory.
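A toy way to see the “subproblems” point (the threshold class below is a made-up example, not anything from the post): unrestricted search over all Boolean hypotheses on n inputs has 2^(2^n) candidates, but restricting attention to a small hypothesis class makes finding the best fit cheap.

```python
# Made-up illustration of the subproblem point: searching all Boolean
# functions of n bits means 2**(2**n) candidates, but a restricted
# class -- here, single-threshold rules "x >= t" -- is searchable in
# polynomial time.

def best_threshold(xs, labels):
    """Return the threshold t in xs maximizing accuracy of 'x >= t'.

    Brute force over candidate thresholds: O(n^2) comparisons here,
    improvable to O(n log n) by sorting -- still polynomial.
    """
    best_t, best_correct = None, -1
    for t in xs:
        correct = sum((x >= t) == y for x, y in zip(xs, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t, best_correct

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
labels = [False, False, True, True, True]
t, score = best_threshold(xs, labels)  # -> (3.0, 5): a perfect fit
```

Nothing here says anything about hard domains; it only illustrates that hardness of the general problem is compatible with easy special cases, which is all the “tool box” reading needs.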
Is the inability of Bayesian theory to provide a method for finding the correct hypothesis evidence that we can’t use it to analyze and update our own beliefs?
Arguing that tool box thinking is better doesn’t require arguing that it’s impossible to analyze and update beliefs with Bayesian thinking.
While probability extends basic logic, it doesn’t extend advanced logic (predicate calculus), as David Chapman argues in Probability theory does not extend logic.
I’m not convinced that probability cannot be made to extend predicate calculus. You need to interpret “for every” and “exists” as transfinite “and” and “or”, but those are not abstruse ingredients impossible to fit.
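For concreteness, the interpretation being proposed can be sketched as follows (a sketch over a countable domain, not a worked-out system):

```latex
% Quantifiers read as (possibly infinite) conjunctions/disjunctions
% over a countable domain {a_1, a_2, ...}:
\begin{align}
P\big(\forall x\, \varphi(x)\big)
  &= P\Big(\bigwedge_{n=1}^{\infty} \varphi(a_n)\Big)
   = \lim_{N \to \infty} P\Big(\bigwedge_{n=1}^{N} \varphi(a_n)\Big) \\
P\big(\exists x\, \varphi(x)\big)
  &= P\Big(\bigvee_{n=1}^{\infty} \varphi(a_n)\Big)
   = \lim_{N \to \infty} P\Big(\bigvee_{n=1}^{N} \varphi(a_n)\Big)
\end{align}
```

The limits exist by monotonicity (the partial conjunctions are non-increasing, the partial disjunctions non-decreasing), and countable additivity is what makes them well defined; uncountable domains and nested quantifiers are where the real difficulties start.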
As Chapman describes the situation, various mathematicians have put a lot of effort into trying to make a system that extends probability to predicate calculus, but no one succeeded in creating a coherent system.
There are two ways to disagree with that:
1) Point to a mathematician who actually successfully modeled the extension.
2) Say that no mathematician really tried to do that.
Say that no mathematician really tried to do that.
I tend to lean toward this. There has been work to fix and strengthen Cox’s theorem, as well as to extend probability to arbitrary preorders or other categories. I’ve yet to see someone try to extend probability to, say, intuitionistic or modal logic.
There are two common types of strawman arguments that I’ve encountered within this debate.
One is the strawman argument that Bayesians typically give against frequentists, where they show how a particular frequentist test gives the wrong answer on a particular problem, but a straightforward application of Bayes’ theorem gives the right answer. Frequentists easily counter that a wiser frequentist would have used a different test for this problem that gives the right answer.
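One standard example of this genre (my own toy numbers, not from the post) is optional stopping: with the same observed counts, the frequentist p-value depends on the stopping rule while the Bayesian posterior does not. And, exactly as above, the frequentist reply is that the wiser move is simply to use the test matching the actual experimental design.

```python
# Toy optional-stopping illustration (numbers are illustrative only):
# same data, two stopping rules, two p-values, one posterior.
from math import comb

heads, tails = 9, 3  # observed: 9 heads, 3 tails

# p-value if n = 12 flips was fixed in advance (one-sided, H0: fair coin):
p_fixed = sum(comb(12, k) for k in range(9, 13)) / 2**12   # ~0.073

# p-value if we flipped until the 3rd tail, i.e. at most 2 tails
# appeared in the first 11 flips:
p_stop = sum(comb(11, k) for k in range(0, 3)) / 2**11     # ~0.033

# Bayesian posterior with a uniform Beta(1, 1) prior depends only on
# the counts: Beta(1 + heads, 1 + tails), identical under both rules.
post_mean = (1 + heads) / (2 + heads + tails)
```

At a 5% threshold the two stopping rules give opposite verdicts on identical data, which is the Bayesian’s talking point; the frequentist’s counter is that conditioning the test on the design is the correct frequentist practice, not a bug.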
The other strawman argument is the one anti-Bayesians make, where they chastise Bayesians for claiming they have the complete theory of rationality / epistemology and no more work needs to be done. This is obviously false, since no Bayesian has ever claimed this, not even Jaynes. A complete theory would need ways to represent hypotheses, and ways to generate them, and the axioms of probability do not make any additional assumptions about what a hypothesis is.
I’m still looking for a well posed inference problem, where a straightforward application of Bayesian principles gives the wrong answer, but a straightforward application of a different set of principles gets the right answer.
This seems a bit motte-and-bailey. In your post, you argue for Bayesianism as a theory of reasoning. Of course you can say that problems that you can’t solve well with Bayesianism aren’t well posed inference problems. Unfortunately, nature doesn’t care about posing well posed inference problems.
Even if Bayesianism is better for a small subset of reasoning problems, that doesn’t imply that it’s good to reject tool-boxism.
What you have there is a defence of the Jaynesian variety, but Yudkowsky is making much stronger claims. For instance he thinks Bayes can replace science, but you can’t replace science with inference alone.
Also, if Bayes is inference alone, it can’t be the sole basis of intelligence.
Even if Bayesianism is better for a small subset of reasoning problems, that doesn’t imply that it’s good to reject tool-boxism.
Yep. If Bayes only does one thing, you need other tools to do the other jobs. Which, by the way, implies nothing about converging, or not, on truth.
“Bayesian” has more than one meaning.