I’m not sure I completely understand this, so instead of trying to think about it directly I’m going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:
Agent A generates a hypothesis about an agent B, analogous to Bob. B will generate a copy of A in any universe that B occupies iff A isn’t there already and A would do the same for B. Agent B lowers A’s daily expected utility by X. A learns that it has the option to make agent B; should A have pre-committed to B’s deal?
Let Y be the daily expected utility without B, so that Y − X is the daily expected utility once B exists. The utility to agent A in a non-B-containing world is

$$\sum_{i=1}^{t} d(i)\,Y,$$

where d(i) is a time-dependent discount factor (possibly identically 1) and t is the lifespan of the agent in days. Obviously, if $X \geq Y$ the agent should not have pre-committed (and if X is zero or negative the agent should, or might as well, pre-commit, but then B would not be a jerk).
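As a quick illustration of that threshold, here is a minimal sketch of the lifetime-utility sum; the particular values of Y, X and t, and the exponential form of d(i), are assumptions for illustration, not part of the problem:

```python
# Minimal sketch of the lifetime-utility sum above.
# Y, X, t and the chosen discount function are illustrative assumptions.

def lifetime_utility(daily_utility, t, d=lambda i: 1.0):
    """Sum of d(i) * daily_utility over days i = 1..t."""
    return sum(d(i) * daily_utility for i in range(1, t + 1))

Y, X, t = 10.0, 4.0, 1000             # assumed example values
d = lambda i: 0.999 ** i              # one possible d(i); d(i) = 1 also works

print(lifetime_utility(Y, t, d))      # world without B
print(lifetime_utility(Y - X, t, d))  # same world after B shows up

# If X >= Y, the second number is non-positive: pre-committing can only hurt.
```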
Otherwise, pre-commitment seems to depend on multiple factors. A wants to maximize its utility summed over possible worlds, but I’m not clear on how this calculation would actually be made.
Just off the top of my head: if A pre-commits, every world in which A exists and B does not, but in which A has the ability to generate B, drops from a daily utility of Y to one of Y − X. On the other hand, every world in which B exists but A does not, and in which B can create A, goes from 0 to Y − X utility for A. Let’s assume a finite and equal number of both sorts of worlds for simplicity. Then, pairing up the two types of world, we go from an average daily utility of Y/2 to Y − X. So we would probably at least want it to be the case that:
$$Y - X \;\geq\; \frac{Y}{2},$$

so

$$Y \geq 2X.$$
So then the tentative answer would be “it depends on how much of a jerk Bob really is”. The rule of thumb from this would indicate that you should only pre-commit if Bob reduces your daily expected utility by less than half. This was under the assumption that we could just “average out” the worlds where the roles are reversed. Maybe this could be refined some with some sort of K-complexity consideration, but I can’t think of any obvious way to do that (that actually leads to a concrete calculation anyway).
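Here is a minimal sketch of that pairing calculation, under the same simplifying assumption of equally many A-only and B-only worlds; the values of Y and X and the function names are mine, chosen only for illustration:

```python
# Sketch of the pairing argument above: one A-only world paired with one
# B-only world, equal weight, per the stated simplifying assumption.
# The values of Y and X are illustrative assumptions.

def avg_daily_utility(Y, X, precommit):
    a_only = (Y - X) if precommit else Y    # A exists and could build B
    b_only = (Y - X) if precommit else 0.0  # only B exists; he builds A iff A pre-commits
    return (a_only + b_only) / 2.0

Y = 10.0
for X in (3.0, 5.0, 7.0):
    helps = avg_daily_utility(Y, X, True) >= avg_daily_utility(Y, X, False)
    print(f"X = {X}: pre-commit helps? {helps}; rule of thumb Y >= 2X: {Y >= 2 * X}")
```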
Also, this isn’t quite like the Prometheus situation, since Bob is not always your creator. Presumably you’re in a world where Bob doesn’t exist; otherwise you wouldn’t have any obligation to use the Bob-maker Omega dropped off, even if you did pre-commit. So I don’t think the same reasoning applies here.
An essential part of who Bob the Jerk is, is that he was created by you, with some help from Omega. He can’t exist in a universe where you don’t, so the hypothetical bargain he offered you isn’t logically coherent.
I don’t see how this can hold. Since we’re reasoning over all possible computable universes in UDT, if Bob can be partially simulated by your brain, a more fleshed-out version (fitting the stipulated parameters) should exist in some possible worlds. Alright, well that’s what I’ve thought of so far.
Maybe this could be refined some with some sort of K-complexity consideration, but I can’t think of any obvious way to do that (that actually leads to a concrete calculation anyway).
It certainly needs to be refined, because if I live in a thousand universes and Bob in one, I would be decreasing my utility in a thousand universes in exchange for additional utility in one.
I can’t make an exact calculation, but it seems obvious to me that my existence has a much greater prior probability than Bob’s, because Bob’s definition contains my definition: I only care about those Bobs who analyze my algorithm and create me if I create them. I would guess, though I cannot prove it formally, that compared to my existence his existence is epsilon, and therefore I should ignore him.
(If this helps you, imagine a hypothetical Anti-Bob that will create you if you don’t create Bob; or he will create you and torture you for eternity if you create Bob. If we treat Bob seriously, we should treat Anti-Bob seriously too. Although, honestly, this Anti-Bob is even less probable than Bob.)
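The weighting worry can be made explicit by redoing the earlier averaging with unequal measures. Writing p for the total measure of A-only worlds and q for the measure of Bob-only worlds (p and q are my notation, not from the comments above), pre-committing changes expected daily utility from pY to (p + q)(Y − X), so it only helps when

$$(p+q)(Y-X) \;\geq\; pY \quad\Longleftrightarrow\quad X \;\leq\; \frac{q}{p+q}\,Y.$$

With p = q this recovers the Y ≥ 2X rule of thumb; with q ≪ p (a thousand universes for me, one for Bob) the tolerable X shrinks toward zero.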
Well, here’s what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob’s complete definition, then the situation isn’t any more transparent to me: in that case we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that make the deal favorable? Limiting to those sub-classes, is a world that contains your definition more likely than one that contains a favorable Bob-agent? I’m not sure.
So the root of the issue that I see is this: your definition is already totally fixed, and if you completely specify Bob, the converse of your statement holds and the two worlds seem to have roughly equal K-complexity. Otherwise, Bob’s definition potentially includes quite a bit of stuff, especially if the only parameters are that Bob is an arbitrary agent fitting the stipulated conditions. The less complete your definition of Bob is, the more general your decision becomes; the more complete your definition of Bob is, the more the complexity balances out.
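One way to make that "balancing out" point concrete, sketched under a Solomonoff-style prior (the framing, and the symbols K, c and $\mathcal{B}$, are mine, not from the exchange above): weight each fully specified agent description by roughly $2^{-K(\cdot)}$, where K is Kolmogorov complexity. What matters for the deal is then the total weight of the class $\mathcal{B}$ of sufficiently-Bob-like agents,

$$\Pr(\text{some } B \in \mathcal{B} \text{ exists}) \;\approx\; \sum_{B \in \mathcal{B}} 2^{-K(B)}.$$

Fully specifying Bob collapses the sum to a single term of order $2^{-(K(\text{you})+c)}$, with c the extra bits Bob’s definition needs beyond yours, while leaving Bob loosely specified makes the sum, and hence the measure on the Bob side of the ledger, potentially much larger.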
EDIT: Also, we could extend the problem some more if we consider Bob as an agent that will take into account an anti-You that will create Bob and torture it for all eternity if Bob creates you. If we adjust to that new set of circumstances, the issue I’m raising still seems to hold.