That’s a very interesting analysis. I think you’re taking the point of view that sorcerers are rational, or that they optimize solely for proving or disproving their own existence. That wasn’t my assumption. Sorcerers are mysterious, so people can’t expect their cooperation in an experiment designed for this purpose. Even under your assumption, you can never distinguish the world where Bright exists from the one where Dark exists: the two sorcerers could behave identically, to convince you that Bright exists. Dark would sort the deck whenever you query for Bright, for instance.
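To put that point in odds form (a minimal sketch, assuming the competing hypotheses are “only Bright exists” and “only Dark exists”): if Dark, when he exists, mimics Bright perfectly, then every observable deck history $h$ satisfies $P(h \mid \text{Bright}) = P(h \mid \text{Dark})$, and so

$$\frac{P(\text{Bright} \mid h)}{P(\text{Dark} \mid h)} \;=\; \frac{P(h \mid \text{Bright})}{P(h \mid \text{Dark})} \cdot \frac{P(\text{Bright})}{P(\text{Dark})} \;=\; \frac{P(\text{Bright})}{P(\text{Dark})}.$$

No experiment on the deck can move those odds away from the prior.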
The way I was thinking about it, you have other beliefs about sorcerers, and your evidence for their existence is established primarily on other grounds (e.g. see my comment about kittens in another thread). Bob and Daisy then take into account the fact that Bright and Dark have these additional peculiar preferences about people’s belief in them.
The assumption of rationality is usually used to get a tractable game. That said, the assumption is not as restrictive as you suggest. A rational sorcerer isn’t obliged to cooperate with you, and can have other goals as well. For example, in my game we could give Dark a strong desire to move the ace of spades to the top of the deck, with some fixed weight relative to his desire to stay hidden. In the resulting game, Daisy would still use only the information from the deck, and wouldn’t need to do Bayesian updates based on her own state of mind. Does that answer your question?
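To make that last point concrete, here is a minimal sketch of Daisy’s update in the modified game. All the numbers are assumptions made up for illustration: the prior, and a parameter `Q_MOVE` standing in for the weight Dark puts on the ace-of-spades desire relative to staying hidden.

```python
# Hypothetical sketch: Daisy's posterior that Dark exists, computed from
# the deck alone. All numbers are illustrative assumptions, not part of
# the original game's specification.

PRIOR_DARK = 0.5          # Daisy's prior that Dark exists (assumed)
Q_MOVE = 0.3              # chance Dark indulges the ace-of-spades desire (assumed)
P_TOP_BY_CHANCE = 1 / 52  # chance a fairly shuffled deck has the ace on top

def posterior_dark(ace_on_top: bool) -> float:
    """Bayes' rule over the single observation 'is the ace of spades on top?'."""
    if ace_on_top:
        p_obs_if_dark = Q_MOVE + (1 - Q_MOVE) * P_TOP_BY_CHANCE
        p_obs_if_no_dark = P_TOP_BY_CHANCE
    else:
        p_obs_if_dark = (1 - Q_MOVE) * (1 - P_TOP_BY_CHANCE)
        p_obs_if_no_dark = 1 - P_TOP_BY_CHANCE
    evidence = PRIOR_DARK * p_obs_if_dark + (1 - PRIOR_DARK) * p_obs_if_no_dark
    return PRIOR_DARK * p_obs_if_dark / evidence

print(posterior_dark(True))   # ~0.94: ace on top is strong evidence for Dark
print(posterior_dark(False))  # ~0.41: its absence is mild evidence against
```

Note that the posterior is a function of the deck observation alone; nothing in the calculation refers to Daisy’s own state of mind.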