Yes, depending on the situation, there may be an intractable discorrelation as you move from the idealization to real-world hazing.
But keep in mind: even if the agents actually were fully correlated (as specified in my phrasing of the Hazing Problem), they could still condemn themselves to perpetual hazing by using a decision theory that returns a different output depending on which branch you have learned you are in, and it is this failure that you want to avoid.
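To make that failure mode concrete, here is a minimal toy sketch (all payoff numbers and function names are my own assumptions, not part of the original problem statement). With full correlation, every agent's decision procedure returns the same output, so the policy you output is the policy your hazer outputs. An agent that evaluates policies before conditioning on its branch picks "don't haze"; an agent that conditions on already having been hazed treats that cost as sunk and hazes, perpetuating the cycle:

```python
# Toy model of the Hazing Problem (hypothetical payoffs, chosen for illustration).
HAZE_COST = 10  # assumed disutility of being hazed
HAZE_GAIN = 1   # assumed small benefit from hazing the newcomer

def utility_under_policy(haze: bool) -> float:
    # Full correlation: every agent follows the same policy, so each agent
    # is hazed if and only if the shared policy says "haze".
    return (HAZE_GAIN - HAZE_COST) if haze else 0.0

# Policy-level (branch-independent) evaluation: compare whole policies
# before learning which branch you are in.
updateless_choice = max([True, False], key=utility_under_policy)

# Branch-dependent evaluation: you wake up already hazed, so the hazing
# cost is sunk and hazing the newcomer nets +HAZE_GAIN this period.
def branch_dependent_choice(already_hazed: bool) -> bool:
    return HAZE_GAIN > 0 if already_hazed else False

print(updateless_choice)              # don't haze: everyone nets 0
print(branch_dependent_choice(True))  # haze: everyone nets HAZE_GAIN - HAZE_COST
```

The point of the sketch is only that the two procedures disagree *because* one of them conditions on the branch, not that these particular payoffs are canonical.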
There’s a difference between believing that a particular correlation is poor, vs. believing that only outcomes within the current period matter for your decision.
(Side note: this relates to the discussion of the CDT blind spot on page 51 of EY’s TDT paper.)