Logical time
In the agent-simulates-predictor problem, I am given a proof that I output a certain action, and then I must make a choice. In making this choice, I am determining whether or not I am given that proof in the first place. Further, the proof must in some sense compress my deliberation, or I would not be able to comprehend it. Thus, I feel that some details of the proof are not “true inputs” for me.
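The circularity in this setup can be sketched with a toy model (all names here are hypothetical illustrations, not from the original text): the predictor checks whether a “proof” that the agent outputs a given action would in fact be validated by the agent’s own policy, so whether any proof exists depends on the policy — the policy is settled “before” the proof.

```python
def predictor(agent_policy, action):
    """Toy predictor: a 'proof' is just the claim itself, and it counts
    as valid only if the agent, when shown it, outputs the claimed action."""
    claim = f"you will output {action}"
    if agent_policy(claim) == action:
        return claim  # a self-fulfilling certificate exists
    return None       # no valid proof exists for this policy


def conforming(claim):
    # Policy 1: do whatever the shown proof claims -> the proof validates.
    return claim.split()[-1]


def defiant(claim):
    # Policy 2: do the opposite of the shown proof -> no proof can exist.
    return "B" if claim.split()[-1] == "A" else "A"


print(predictor(conforming, "A"))  # the claim is returned as a valid proof
print(predictor(defiant, "A"))     # None: this policy refutes every proof
```

The point of the sketch is only that the agent’s counterfactual behavior on each possible proof fixes which proofs the predictor can hand it, which is the sense in which the full proof is not a free-standing input.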
I want to say that my deciding what I would do if I saw a proof is “earlier” than the proof according to some generalized notion of causality, or earlier in “logical time.” I want to say that the only way to make the agent-simulates-predictor setup make sense is for the full proof itself not to be a true input for me. I think that Cartesian frames are a step towards making the notion of inputs and outputs continuous, and so could help our thinking about this problem.