In keeping with my tradition of telling people to be less confident...
I strongly agree that the world is built on logic that can be understood by the individual human mind. And I think it’s likely that there are simple principles for correct reasoning, which might lead to an intelligence explosion. Yay to you for resisting backwards drift on that!
But maybe let’s not tie that to the idea that all correct reasoning must approximate Bayes. Ironically, LW is the best source of arguments for why Bayesian probability is itself an approximation to some more precise theory of uncertainty (UDT, the Absent-Minded Driver, Psy-Kosh’s problem, Counterfactual Mugging, etc.) and for the many problems that remain even then (the nature of observation, the nature of priors, logical uncertainty, etc.). In the end, a theory of uncertainty doesn’t just have to be correct in itself; it must also accurately describe the agent whose uncertainty it models, so it’s tied up with what it means to be an agent. We haven’t even scratched the surface of that.
In a quantum mechanics lecture I once attended, the professor stated that theories like quantum field theory, string theory, and other theories of quantum gravity were contained within plain quantum mechanics, because all of them had to work within the quantum framework (in the sense that they were quantum mechanics with more assumptions added).
I wonder if something similar is true for Bayesian probability and theories like UDT, logical induction, and so on. Do any of these extensions violate Bayesian principles, so that they merely overlap with Bayesian probability rather than being contained within it?
I think they do violate them. The Absent-Minded Driver problem is the simplest example, constructed to violate the independence axiom of vNM. Logical induction too, because the only position fully compatible with Bayes is logical omniscience, and we want to model logical non-omniscience (not knowing all true theorems). To tell an agent what to do in a situation, we need a model of uncertainty for the agent in the situation, which can be as complex as the agent and the situation. Bayesian probability is more of a tractable limit case, like Newtonian mechanics or Nash equilibrium.
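(A standard fact behind the logical-omniscience point, stated loosely: the probability axioms themselves leave no room for uncertainty about logical truths.)

```latex
% Any coherent probability assignment over sentences satisfies:
%   every tautology T gets probability 1, and entailment preserves order.
\models T \;\Rightarrow\; P(T) = 1,
\qquad
A \models B \;\Rightarrow\; P(B) \ge P(A).
% So a Bayesian agent already "knows" every theorem of its logic; there is
% no way to represent uncertainty about an as-yet-unproved mathematical claim.
```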
These are not violations of Bayesian probability. VNM rationality exists independently of Bayes; logical induction might be a coherent extension of Bayesian probability to settings where classical logic (which is what presupposes omniscience) is not applicable; UDT similarly presupposes logical omniscience; Counterfactual Mugging is a problem of decision theory, not probability; and so on.
Let’s keep Bayesian probability, decision theory, VNM rationality, classical logic, etc. all well separated.
If you separate Bayesian probability from decision theory, then it has no justification except self-consistency, and you can no longer say that all correct reasoning must approximate Bayes (which is the claim under discussion).
Sure it does. Haven’t you heard of Cox’s Theorem? It singles out (Bayesian) probability theory as the uniquely determined extension of propositional logic to handle degrees of certainty. There’s also my recent paper, “From Propositional Logic to Plausible Reasoning: A Uniqueness Theorem”:
https://authors.elsevier.com/a/1VIqc,KD6ZCKMf
or
https://arxiv.org/abs/1706.05261
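(For readers who haven’t seen it: the upshot of Cox-style theorems, stated loosely and not following the exact formulation of the paper above, is that any plausibility measure satisfying the desiderata is a monotone rescaling of a probability obeying the usual product and sum rules.)

```latex
% Cox-style conclusion (loose statement): after rescaling, plausibilities
% obey the product rule and the sum rule of probability theory.
P(A \wedge B \mid C) = P(A \mid B \wedge C)\, P(B \mid C),
\qquad
P(A \mid C) + P(\lnot A \mid C) = 1.
```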
I guess the problematic assumption is that we want to assign degrees of certainty. That doesn’t hold in AMD-like situations. They require reasoning under uncertainty, but any reasoning based on degrees of certainty leads to the wrong answer.
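Here’s a minimal numerical sketch of that claim, using the textbook Piccione–Rubinstein version of the Absent-Minded Driver (exit at the first intersection pays 0, exit at the second pays 4, driving past both pays 1); the specific payoffs and the fixed-point framing are assumptions of this sketch, not anything stated above. Planning ahead, the best plan is to continue with probability 2/3; recomputing expected utility at an intersection using a degree of belief about which intersection you’re at points to a different, worse plan.

```python
# Absent-Minded Driver (Piccione & Rubinstein payoffs, assumed for this sketch):
# exit at the first intersection = 0, exit at the second = 4, drive past both = 1.
# The driver can't tell the intersections apart, so a plan is a single
# probability p of continuing at whatever intersection he finds himself at.

import numpy as np

def ex_ante_value(p):
    """Expected payoff of plan p, evaluated before the drive (planning stage)."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

def node_value(q, p):
    """Expected payoff as recomputed *at* an intersection: given the old plan p,
    the driver assigns degree of belief alpha = 1/(1+p) to being at the first
    intersection, then evaluates deviating to continuation probability q."""
    alpha = 1 / (1 + p)
    at_first = q * ((1 - q) * 4 + q * 1)   # continue now, then face the second exit
    at_second = (1 - q) * 4 + q * 1        # already at the second exit
    return alpha * at_first + (1 - alpha) * at_second

grid = np.linspace(0.0, 1.0, 100001)

# Planning-stage optimum: maximize the ex ante payoff directly.
p_plan = grid[np.argmax(ex_ante_value(grid))]

# Degrees-of-belief answer: a plan that the at-the-intersection recalculation
# does not want to deviate from (fixed point of the best-response map).
p_node = 0.5
for _ in range(200):
    p_node = grid[np.argmax(node_value(grid, p_node))]

print(f"planning-stage optimum:   p = {p_plan:.3f} (analytically 2/3), "
      f"ex ante value {ex_ante_value(p_plan):.3f}")
print(f"belief-based fixed point: p = {p_node:.3f} (analytically 4/9), "
      f"ex ante value {ex_ante_value(p_node):.3f}")
```

The two answers disagree, and the belief-based one does worse by the driver’s own lights, which is the sense in which assigning a degree of certainty to “which intersection am I at?” leads you astray here.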
Correct inference must approximate Bayes. Correct reasoning is inference + hypothesis generation/updating + deciding what counts as evidence.
Decision theories are concerned with the last piece of the puzzle.
If I’m wrong, please show me a not-obviously-wrong theory that violates Bayes’ theorem...
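(For concreteness, the identity such a counterexample would have to break is just:)

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
```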