Well, here’s what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If your definition does not need to contain Bob’s complete definition, then the situation isn’t any more transparent to me: in that case, we could include worlds with any sufficiently-Bob-like entity that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sub-classes of Bob-agents that make the deal favorable? Restricting to those sub-classes, is a world that contains your definition more likely than one that contains a favorable Bob-agent? I’m not sure.
So the root of the issue, as I see it, is this: your definition is already totally fixed, and if you completely specify Bob, the converse of your statement holds and the two worlds seem to have roughly equal K-complexity. Otherwise, Bob’s definition potentially includes quite a bit of stuff, especially if the only constraints are that Bob is an arbitrary agent satisfying the stipulated conditions. The less complete your definition of Bob is, the more general your decision becomes; the more complete it is, the more the complexity balances out.
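One rough way to see this balancing, if we lean on the standard subadditivity property of Kolmogorov complexity (this is just a sketch; the labels "You" and "Bob" stand in for whatever strings encode the agents, and the additive slack depends on the choice of universal machine):

```latex
K(\text{You}, \text{Bob}) \;\le\; K(\text{You}) + K(\text{Bob} \mid \text{You}) + O(\log)
```

If Bob is fully specified, the conditional term $K(\text{Bob} \mid \text{You})$ is some fixed quantity, so a world containing both agents is at most modestly more complex than one containing you alone, and the two hypotheses carry comparable weight. If Bob is only loosely specified, the class of admissible Bob-agents is huge, and the bound says correspondingly little about any particular Bob.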
EDIT: Also, we could extend the problem further by considering a Bob who must take into account an anti-You: an agent that will create Bob and torture him for all eternity if Bob creates you. Even under that adjusted set of circumstances, the issue I’m raising still seems to hold.