Could you say more about the motivation here?
Whenever people have written/talked about ECL, a common thing I’ve read/heard is that “of course, this depends on us finding some way of saying that one decision algorithm is similar/dissimilar to another, since we’re not going to encounter perfect copies very often”. That was at least the case when I last asked Oesterheld about this, but I haven’t read Treutlein 2023 closely enough yet to figure out whether he has a satisfying solution.
The fact that we didn’t have a characterization of logical correlation bugged me and stayed in the back of my mind, since it felt like a problem one could make progress on. Today in the shower I was thinking about it, and the post above is what came of it.
(I also have the suspicion that having a notion of “these two programs produce the same/similar outputs in a similar way” might be handy in general.)
If you want to use it for ECL, then it’s not clear to me why internal computational states would matter.
My reason for caring about internal computational states is: in the twin prisoners’ dilemma[1], I cooperate because we’re the same algorithm. If we modify the twin to have a slightly longer right index-finger nail, I would still cooperate: even though they’re now technically a different algorithm, little enough has changed that their internal states are still similar enough to mine.
But it could be that I’m in a prisoner’s dilemma with some program p⋆ that, given some inputs, returns the same outputs as I do, but for completely different “reasons”—that is, the internal states are very different, and a slight change in input would cause the output to be radically different. My logical correlation with p⋆ is pretty small, because, even though it gives the same output, it gives that output for very different reasons, so I don’t have much control over its outputs by controlling my own computations.
At least, that’s how I understand it.
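To make that distinction concrete, here is a toy sketch in Python (my own illustration, not anything from the post or from Treutlein 2023; the function names and the trace-similarity measure are made up): two programs with identical input-output behaviour whose internal computations, recorded as traces of intermediate states, barely overlap.

```python
def double_by_addition(x: int) -> tuple[int, list[int]]:
    """Compute 2*x with a single addition; return (output, trace of states)."""
    trace = [x]
    acc = x + x
    trace.append(acc)
    return acc, trace


def double_by_counting(x: int) -> tuple[int, list[int]]:
    """Compute 2*x by repeated increments; same outputs, different internals."""
    trace = []
    acc = 0
    for _ in range(2 * x):
        acc += 1
        trace.append(acc)
    return acc, trace


def trace_similarity(t1: list[int], t2: list[int]) -> float:
    """Crude toy measure: fraction of aligned positions where the
    intermediate states coincide (1.0 = identical computations)."""
    n = max(len(t1), len(t2))
    if n == 0:
        return 1.0
    return sum(1 for a, b in zip(t1, t2) if a == b) / n


for x in [2, 5, 10]:
    out_a, trace_a = double_by_addition(x)
    out_b, trace_b = double_by_counting(x)
    assert out_a == out_b                         # same input-output behaviour...
    print(x, trace_similarity(trace_a, trace_b))  # ...but near-zero trace overlap
```

Judged by outputs alone, the two programs look perfectly correlated; it’s only a trace-level comparison that lets p⋆-style cases (same outputs, very different “reasons”) come apart.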
Is this actually ECL, or just acausal trade?