I like this formulation. Personally, I’ve felt that Newcomb’s problem is a bit overly complex and counter-intuitive. For instance, Newcomb’s problem with transparent boxes is arguably equivalent to the regular version.
Andrew Critch once mentioned a similar problem around rock-paper-scissors and Bayes. The setup was: “Imagine you are playing rock-paper-scissors against an Omega who can near-perfectly predict your actions. What should your estimate of your chance of winning be?” The idea was that a Bayesian would have to say they have a 1/3 + δ chance of winning, and should therefore expect to win about a third of the time, but they would predictably win ~0% of the time; this showcases a flaw in Bayes. However, it was claimed that Logical Induction would handle this.
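A quick way to see the gap is to simulate it. This is just my own reconstruction of the setup, with EPSILON as a made-up stand-in for Omega’s small error rate:

```python
import random

# Rock-paper-scissors against a near-perfect predictor. Omega predicts
# the player's move with accuracy 1 - EPSILON and plays the counter.
EPSILON = 0.01
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {beaten: beater for beater, beaten in BEATS.items()}  # move that beats each move

wins, trials = 0, 100_000
for _ in range(trials):
    move = random.choice(MOVES)
    if random.random() < 1 - EPSILON:
        prediction = move                  # Omega reads the player correctly
    else:
        prediction = random.choice(MOVES)  # rare prediction error
    omega_move = COUNTER[prediction]       # Omega plays the counter
    wins += BEATS[move] == omega_move      # player wins iff their move beats Omega's

print(wins / trials)  # ~EPSILON / 3, nowhere near the naive 1/3 + δ
```

The naive symmetric estimate says every move wins about a third of the time; the simulated frequency is roughly ε/3.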
Another game your post brought to mind is Three-card Monte with a dealer who chooses randomly but is really good at reading minds.
I definitely would acknowledge this as a nasty flaw in a Bayesian analysis, but I could easily imagine that it’s a flaw in the naive use of Bayesian analysis rather than in the ideal.
I was a bit curious what a reflective Bayes might look like. Something like,
p(B | (p(B|ω) = 0.3333+δ), ω)
where B is the event of winning and ω is the rest of the agent’s evidence.
In the case of rock-paper-scissors, the agent knows that
p(B | (p(B|ω) = 0.3333+δ), ω) = 0+γ
It could condition on this, making a much longer claim,
p(B | (p(B|ω) = 0.3333+δ), ω, (p(B | (p(B|ω) = 0.3333+δ), ω) = 0+γ))
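As a toy sketch of how that regress could bottom out (entirely my own framing; true_win_rate is a made-up stand-in for what actually happens against the predictor):

```python
# Reflective conditioning as a fixed-point search. Assumption: against a
# near-perfect predictor the realized win rate is ~0 + gamma no matter
# what credence the agent holds.
GAMMA = 0.01  # residual win rate left by the predictor's small error rate

def true_win_rate(credence: float) -> float:
    # Stand-in for the world's response to the agent's credence; the
    # predictor exploits the agent regardless, so this is constant.
    return GAMMA

def reflective_credence(credence: float, levels: int = 5) -> float:
    # Each level conditions on the one below: "given that my credence
    # was c, my actual win rate is true_win_rate(c)".
    for _ in range(levels):
        credence = true_win_rate(credence)
    return credence

print(reflective_credence(1 / 3))  # 0.01: every level collapses to gamma
```

Here the longer and longer conditionals all collapse to the same fixed point, 0+γ, which is roughly what you’d hope a reflective Bayesian would settle on.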
One obvious issue that comes up is that the justifications for Bayes rest on axioms of probability that clearly aren’t holding up here. I’d assume the probabilities assigned to the outcomes no longer form a proper measure, as they don’t sum to 1.
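For instance (just my guess at where it breaks): by symmetry every pure choice looks the same ex ante, so the agent assigns p(win|ω) = p(draw|ω) = p(lose|ω) = 0.3333+δ, and the total is 3 × (0.3333+δ) ≈ 1 + 3δ ≠ 1.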
Compare Miller’s principle: p(x | p(x) = y) = y.