I’m guessing that “strongly logically correlated in terms of outputs” means that the other algorithm produces the same outputs as yours for a large fraction of inputs (but not all), according to some measure over the space of all possible inputs.
If that’s all you know, then there will likely be nearly zero logical correlation between your outputs for this particular instance, since the agreement could be concentrated on inputs quite unlike the ones you actually received. What you decide then depends on what your decision algorithm does when there is close to zero logical correlation.
If you have more specific information than just the existence of a strong logical correlation in general, then you should use it. For example, you may be told that the measure over which the correlation is taken is heavily weighted toward your specific inputs for this instance, and that the other player is given the same inputs. That raises the logical correlation between outputs for this instance, and (if your decision algorithm depends upon such things) you should cooperate.
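To make the distinction concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption (the two toy algorithms, the input measures, and the particular input x = 4 are not part of the original setup); it just shows how a high agreement rate under a broad measure over inputs is compatible with near-certain disagreement on the specific input at hand, unless the measure is weighted toward that input.

```python
# Purely illustrative sketch: "strong logical correlation" measured as agreement
# over a broad input distribution says little about agreement on one specific
# input, unless the measure is weighted toward that input.
import random

def my_algorithm(x: int) -> str:
    """Toy decision procedure: cooperate on even inputs."""
    return "C" if x % 2 == 0 else "D"

def other_algorithm(x: int) -> str:
    """Toy counterpart: agrees with my_algorithm everywhere except on a small
    band of inputs that happens to contain this instance's input (x = 4)."""
    if 0 <= x < 10:  # the band of disagreement
        return "D" if x % 2 == 0 else "C"
    return "C" if x % 2 == 0 else "D"

def agreement_rate(sampler, n: int = 100_000) -> float:
    """Monte Carlo estimate of P(outputs agree) under the measure `sampler`."""
    hits = sum(my_algorithm(x) == other_algorithm(x) for x in (sampler() for _ in range(n)))
    return hits / n

# Broad measure over all inputs: the two algorithms look "strongly correlated".
broad = lambda: random.randrange(0, 1_000_000)
# Measure heavily weighted toward this instance's specific input (x = 4).
this_instance = lambda: 4 if random.random() < 0.95 else random.randrange(0, 1_000_000)

print(f"agreement under broad measure:        {agreement_rate(broad):.3f}")          # ~1.000
print(f"agreement weighted toward this input: {agreement_rate(this_instance):.3f}")  # ~0.050
```

Under the broad measure the two algorithms agree almost everywhere, yet conditional on the input you were actually given they almost always disagree, which is why the extra information about how the measure is weighted matters for whether to cooperate.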