I think they do violate it. The Absent-Minded Driver problem is the simplest example; it was constructed to violate the independence axiom of vNM utility theory. Logical induction violates it too, because the only position fully compatible with Bayes is logical omniscience, and we want to model logical non-omniscience (not knowing all true theorems). To tell an agent what to do in a situation, we need a model of uncertainty for the agent in that situation, which can be as complex as the agent and the situation. Bayesian probability is more of a tractable limiting case, like Newtonian mechanics or Nash equilibrium.
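For concreteness, here is the planning-stage calculation, using what I believe are the standard Piccione–Rubinstein payoffs (exit at the first intersection: 0; exit at the second: 4; drive past both: 1). If the driver commits to continuing with probability p at whatever intersection he finds himself at, the expected payoff is

    E(p) = 4p(1−p) + 1·p² = 4p − 3p²,

which is maximized at p = 2/3, for an expected payoff of 4/3.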
These are not violations of Bayesian probability. VNM rationality exists independently of Bayes; logical induction might be a coherent extension of Bayesian probability to settings where classical logic (which is what presupposes omniscience) is not applicable; UDT similarly presupposes logical omniscience; counterfactual mugging is a problem of decision theory, not of probability; etc.
Let’s keep Bayesian probability, decision theory, VNM rationality, classical logic, etc. all well separated.
If you separate Bayesian probability from decision theory, then it has no justification except self-consistency, and you can no longer say that all correct reasoning must approximate Bayes (which is the claim under discussion).
Sure it does. Haven’t you heard of Cox’s Theorem? It singles out (Bayesian) probability theory as the uniquely determined extension of propositional logic to handle degrees of certainty. There’s also my recent paper, “From Propositional Logic to Plausible Reasoning: A Uniqueness Theorem”
https://authors.elsevier.com/a/1VIqc,KD6ZCKMf
or
https://arxiv.org/abs/1706.05261
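For readers who haven’t seen it, the rough content of Cox-style theorems, as I would summarize it, is this: any real-valued assignment of plausibilities to propositions that satisfies a handful of consistency requirements must be a monotone rescaling of a function P obeying the product and sum rules,

    P(A ∧ B | C) = P(A | B ∧ C) · P(B | C)   and   P(A | C) + P(¬A | C) = 1,

and Bayes’ theorem follows immediately from the product rule.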
I guess the problematic assumption is that we want to assign degrees of certainty. That doesn’t hold in AMD-like situations. They require reasoning under uncertainty, but any reasoning based on degrees of certainty leads to the wrong answer.
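To spell out how degrees of certainty go wrong here, on the usual analysis as I understand it: suppose the driver at an intersection assigns degree of belief α to “this is the first exit”. Given the committed strategy p, consistency forces α = 1/(1+p), since the first intersection is always reached and the second only with probability p. If the driver then re-optimizes his continuation probability q by expected utility,

    EU(q) = α[4q(1−q) + q²] + (1−α)[4(1−q) + q],

the derivative at the planning optimum q = p = 2/3 (where α = 3/5) comes out to −6/5 rather than 0, so the belief-based calculation tells the driver to deviate from the strategy that is in fact optimal.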
Correct inference must approximate Bayes. Correct reasoning is inference + hypothesis generation and updating + deciding what counts as evidence.
Decision theories are concerned with the last piece of the puzzle.
If I’m wrong, please show me a not-obviously-wrong theory that violates Bayes’ theorem...
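For reference, the theorem in question:

    P(H | E) = P(E | H) · P(H) / P(E),

i.e., posterior ∝ likelihood × prior. The claim is that any correct inference procedure has to approximate this update.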