With this setting, you ensure that the goal keeper isn't a causal descendant of the LCDT agent.
Oops! You are right: there is no cutting involved in creating C from B in my toy example. I did not realise that. Next time, I need to draw these models on paper before posting, not just work them out in my head.
C and B still work as examples for exploring what one might count as deception or non-deception. But my discussion of a random prior above only makes sense if you first extend B to a multi-step model, in which the goal keeper's knowledge explicitly depends on earlier agent actions.
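To make that multi-step extension concrete, here is a minimal sketch of what I have in mind. All node names and the exact edge structure are my own illustration (they are not taken from the original diagrams for A, B, or C); the point is only that once the goal keeper's knowledge depends on an earlier agent action, an LCDT-style cut has something to sever:

```python
# Hypothetical multi-step extension of model B: the goal keeper's
# knowledge now depends on an earlier agent action. Node names are
# illustrative assumptions, not the original model's labels.
B_ext = {
    "agent_action_1": ["goalkeeper_knowledge"],
    "goalkeeper_knowledge": ["goalkeeper_action"],
    "goalkeeper_action": ["outcome"],
    "agent_action_2": ["ball_position"],
    "ball_position": ["outcome"],
    "outcome": [],
}

def descendants(graph, node):
    """All nodes reachable from `node` via directed edges (iterative DFS)."""
    seen, stack = set(), list(graph[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return seen

# An LCDT-style cut severs the outgoing edges from the agent's decision
# node into other agents' nodes, so the planning agent treats the goal
# keeper as causally unreachable.
B_cut = {k: list(v) for k, v in B_ext.items()}
B_cut["agent_action_1"] = []

print("goalkeeper_action" in descendants(B_ext, "agent_action_1"))  # True
print("goalkeeper_action" in descendants(B_cut, "agent_action_1"))  # False
```

In the uncut multi-step graph the goal keeper's action is a descendant of the agent's first action, and only the cut removes that dependence, which is exactly the distinction my one-step toy example failed to exhibit.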