It’s not about complexity, it is just expected total gain: simply the second calculation of the OP.
I just argued that the second calculation is right and that it is what the agents should do in general (unless they are completely egoistic about their own particular copies).
This was a simple situation. I’m suggesting a ‘big picture’ idea for the general case.
According to Wei Dai and Nesov above, the anthropic-like puzzles can be re-interpreted as ‘agent coordination’ problems (multiple agents trying to coordinate their decision making). And you seemed to have a similar interpretation. Am I right?
If Dai and Nesov’s interpretation is right, it seems the puzzles could be reinterpreted as being about groups of agents trying to agree in advance on a ‘decision-making protocol’.
But now I ask: is this not equivalent to trying to find a ‘communication protocol’ that enables them to best coordinate their decision making? And rather than trying to directly calculate the results of every possible protocol (which would be impractical for all but the simplest problems), I was suggesting using information theory to apply a complexity measure to protocols, in order to rank them.
Indeed, I ask whether this is actually the correct way to interpret Occam’s Razor/Complexity Priors. That is, my suggestion is to re-interpret Occam/priors as referring to copies of agents trying to coordinate their decision making using some communication protocol, such that they seek to minimize the complexity of this protocol.
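To make the ranking idea concrete, here is a minimal sketch, under the assumption that a protocol can be written down as a text description and that compressed length is a crude stand-in for its true (Kolmogorov) complexity. The function names (`description_length`, `rank_protocols`) and the example protocol strings are hypothetical illustrations, not anything proposed above.

```python
import zlib

def description_length(protocol: str) -> int:
    """Crude upper bound on a protocol's complexity: the byte length
    of its zlib-compressed description (a stand-in for Kolmogorov
    complexity, which is uncomputable)."""
    return len(zlib.compress(protocol.encode("utf-8")))

def rank_protocols(protocols):
    """Rank candidate protocol descriptions from simplest to most
    complex, attaching an unnormalized Occam-style prior 2^-L."""
    ranked = sorted(protocols, key=description_length)
    return [(p, description_length(p), 2.0 ** -description_length(p))
            for p in ranked]

# Hypothetical candidate coordination protocols for copies of an agent.
candidates = [
    "every copy follows the same precommitted policy maximizing total gain",
    "copy 1 does X, copy 2 does Y, copy 3 does Z, " * 3,  # ad hoc, repetitive
]
for protocol, length, prior in rank_protocols(candidates):
    print(length, protocol[:40])
```

This only orders protocols by description length; actually choosing one would still mean weighing that complexity prior against each protocol’s expected total gain, which is the part that is expensive to compute directly.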