Hmm. I was thinking that determinism requires that you get the same output in the same situation, but I guess I was not accounting for the fact that we do not require the two nodes in the information set to be the same situation; we only require that they be indistinguishable to the agent.
It does seem realistic to have the absent-minded driver flip a coin (although perhaps it is better to model that as a third option, flipping a coin, which points to a chance node).
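Just to make that concrete, here is a rough sketch of that third option, using some made-up minimal tree types (nothing from the post): the new action leads to a chance node that goes 50/50 to the subtrees of the two original actions.

```python
from dataclasses import dataclass

# Hypothetical minimal game-tree types, purely for illustration.
@dataclass
class Decision:
    info_set: str
    actions: dict        # action label -> subtree (or terminal payoff)

@dataclass
class Chance:
    outcomes: list       # list of (probability, subtree) pairs

def with_coin_option(node):
    # The "third option": add a "flip" action whose child is a chance node
    # splitting 50/50 between the subtrees of the two original actions.
    (_, left), (_, right) = list(node.actions.items())[:2]
    coin = Chance(outcomes=[(0.5, left), (0.5, right)])
    return Decision(info_set=node.info_set, actions={**node.actions, "flip": coin})

# e.g. a single intersection, with terminal payoffs standing in for subtrees:
print(with_coin_option(Decision(info_set="X", actions={"exit": 0, "continue": 1})))
```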
On the other hand, if I am a deterministic Turing machine, and Omega simulates me and puts a dollar in whichever of two boxes he predicts I will not pick, then I cannot win this game unless I have an outside source of randomness.
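A toy simulation of that game makes the gap explicit (the names and the 10,000-trial setup are just illustrative): the deterministic agent never finds the dollar, while an agent with an outside coin finds it half the time.

```python
import random

def play(agent, trials=10_000):
    # Omega simulates the agent to predict its pick, then puts the dollar
    # in the *other* box.  For a deterministic agent the simulation always
    # matches the real pick; an outside coin breaks that correlation.
    wins = 0
    for _ in range(trials):
        prediction = agent()           # Omega's simulation run
        dollar_box = 1 - prediction    # dollar goes where Omega predicts I won't look
        wins += (agent() == dollar_box)
    return wins / trials

deterministic_agent = lambda: 0               # fixed output, perfectly predictable
coin_agent = lambda: random.randrange(2)      # uses an outside source of randomness

print(play(deterministic_agent))   # 0.0  -- never wins
print(play(coin_agent))            # ~0.5 -- wins half the time in expectation
```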
It seems like in different situations you want different models: you have two different types of agents, a deterministic dUDT agent and a randomized rUDT agent. We should be looking at both, because they are not the same. I also do not know which one I am as a human.
By asking about the Absent-Minded Driver with a coin, you were phrasing the problem so that it does not matter, because an rUDT agent is just a dUDT agent which has access to a fair coin that he can flip any number of times at no cost.
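One standard way to see that: with a fair coin that costs nothing to flip, a dUDT agent can implement any behavioral-strategy probability p by lazily comparing flips against the binary expansion of p, using only two flips on average. (This is a generic trick, not anything specific to the post.)

```python
import random

def bernoulli_from_fair_coin(p):
    # Sample an event of probability p using only fair coin flips, by
    # lazily comparing the flips against the binary expansion of p.
    # Takes two flips on average, whatever p is.
    while True:
        p *= 2
        bit_of_p, p = (1, p - 1) if p >= 1 else (0, p)
        flip = random.randrange(2)     # one fair coin flip
        if flip < bit_of_p:
            return True                # flip 0 vs bit 1: the event happens
        if flip > bit_of_p:
            return False               # flip 1 vs bit 0: it does not
        # flip == bit_of_p: still undecided, flip again

# Sanity check: should print something close to 1/3.
print(sum(bernoulli_from_fair_coin(1 / 3) for _ in range(100_000)) / 100_000)
```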
I agree that there is a difference, and I don’t know which model describes humans better. It doesn’t seem to matter much in any of our toy problems, though, apart from AMD, where we really want randomness. So I think I’m going to keep the post as is, with the understanding that you can remove randomness from the model if you really want to.
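For reference, here is why AMD really wants randomness, using the usual Piccione-Rubinstein payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both): the best behavioral strategy continues with probability 2/3 and gets 4/3, strictly more than either pure strategy.

```python
# Expected payoff of the absent-minded driver who continues with probability p:
# 0*(1-p) + 4*p*(1-p) + 1*p*p = 4p - 3p^2, maximized at p = 2/3.
def amd_value(p):
    return 0 * (1 - p) + 4 * p * (1 - p) + 1 * p * p

print(amd_value(0.0))    # 0.0   -- always exit (pure strategy)
print(amd_value(1.0))    # 1.0   -- always continue (pure strategy)
print(amd_value(2 / 3))  # ~1.33 -- the mixed optimum beats every pure strategy
```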
I agree that that is a good solution. Since adding randomness to a node is something that can be done in a formulaic way, it makes sense to have information sets that are just labeled “you can use behavioral strategies here.” It also makes sense to have them labeled that way by default.
I do not think that agents wanting but not having randomness is any more pathological than Newcomb’s problem (although that is already pretty pathological).