Sorry, there were two things you could have meant when you said the assumption that the human uses a Bayes net seemed crucial. I thought you were asking why the builder couldn’t just say “That’s unrealistic” when the breaker suggested the human runs a Bayes net. The answer to that is what I said above—because the assumption is that we’re working in the worst case, the builder can’t invoke unrealism to dismiss the counterexample.
If the question is instead "Why is the builder allowed to just focus on the Bayes net case?", the answer to that is the iterative nature of the game. The Bayes net case (and in practice a few other simple cases) was the case the breaker chose to give, so if the builder finds a strategy that works for that case they win the round. Then the breaker can come back and add complications which break the builder's strategy again, and the hope is that after many rounds we'll get to a place where, despite trying hard, it's really difficult to think of a counterexample that breaks the builder's strategy.
Ah, that makes sense. In the section where you explain the steps of the game, I interpreted the comments in parentheses as further explanations of the step, rather than just a single example. (In hindsight the latter interpretation is obvious, but I was reading quickly—might be worth making this explicit for others who are doing the same.) So I thought that Bayes nets were built into the methodology. Apologies for the oversight!
I’m still a little wary of how much the report talks about concepts in a human’s Bayes net without really explaining why this is anywhere near a sensible model of humans, but I’ll have another read through and see if I can pin down anything that I actively disagree with (since I do agree that it’s useful to start off with very simple assumptions).
Ah got it. To be clear, Paul and Mark do in practice consider a bank of multiple counterexamples for each strategy, with different ways the human and predictor could think, though they’re all pretty simple in the same way the Bayes net example is (e.g. deduction from a set of axioms); my understanding is that essentially the same kinds of counterexamples apply, for essentially the same underlying reasons, to those other simple examples. The doc sticks with one running example for clarity and length reasons.