If you want to use it for ECL, then it’s not clear to me why internal computational states would matter.
My reason for caring about internal computational states is: in the twin prisoner's dilemma[1], I cooperate because we're the same algorithm. If we modify the twin to have a slightly longer right index fingernail, I would still cooperate: they're now a different algorithm, but so little about the algorithm has changed that its internal states are still similar enough to mine.
But it could be that I'm in a prisoner's dilemma with some program p⋆ that, given some inputs, returns the same outputs as I do, but for completely different "reasons": its internal states are very different, and a slight change in input could cause its output to be radically different. My logical correlation with p⋆ is pretty small: even though it produces the same outputs, it produces them for very different reasons, so I don't have much control over its outputs by controlling my own computations.
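To make this concrete, here is a toy sketch (the names and the particular rules are purely my own illustration, not anything from the literature): two programs that agree on every input observed so far, but one follows a general rule while the other is effectively a lookup table, so a slightly different input breaks the agreement.

```python
# Toy illustration: same outputs on observed inputs, different internal "reasons".

def my_algorithm(x: int) -> str:
    # Decides via a general rule about the input (stands in for "my reasons").
    return "C" if x % 2 == 0 else "D"

# p_star happens to match my_algorithm on the inputs seen so far,
# but encodes no general rule at all.
P_STAR_TABLE = {0: "C", 1: "D", 2: "C", 3: "D", 4: "C"}

def p_star(x: int) -> str:
    # Defaults to defection on anything it has not memorized.
    return P_STAR_TABLE.get(x, "D")

if __name__ == "__main__":
    # Identical outputs on the inputs observed so far...
    print([my_algorithm(x) for x in range(5)])  # ['C', 'D', 'C', 'D', 'C']
    print([p_star(x) for x in range(5)])        # ['C', 'D', 'C', 'D', 'C']
    # ...but a slightly new input exposes the different internal reasons:
    print(my_algorithm(6), p_star(6))           # C D
```

The point of the sketch: matching behaviour on the inputs I've seen tells me little about whether changing my own computation would change p⋆'s output.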
At least, that’s how I understand it.
Is this actually ECL, or just acausal trade?