In terms of expected utility, it is better for “you” (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don’t contain simulations.
That is not the case if the money can be used in a way that has long-term impact.
No, it isn’t meaningless: chances simply become operationalised in terms of bets, or other decisions with variable payoff.
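A minimal sketch of that operationalisation, with made-up payoffs (the function name and numbers are mine, purely for illustration):

```python
# The "chance" of an event S is read off from the bet you are indifferent to:
# indifference means p*win - (1 - p)*lose = 0, i.e. p = lose / (win + lose).

def implied_probability(win: float, lose: float) -> float:
    """Probability of S implied by indifference to a bet that gains `win` utils
    if S happens and loses `lose` utils otherwise."""
    return lose / (win + lose)

# Example: indifference to a bet that gains 3 utils if S and loses 1 util
# otherwise implies a chance of 1/4 for S.
print(implied_probability(win=3.0, lose=1.0))  # 0.25
```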
This doesn’t give an unambiguous recipe for computing probabilities, since it depends on how the results of the bets are accumulated to influence utility. An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?
Incidentally, in terms of original modal realism (due to David Lewis), “you” are a concrete unique individual who inhabits exactly one world, but it is unknown which one. Other versions of “you” are your “counterparts”. It is usually not possible to group all your counterparts together and treat them as a single (distributed) being, YOU, because the counterpart relation is not an equivalence relation (it doesn’t partition possible people into neat equivalence classes). As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative “drift” along the chain, so that the ends are very different from each other (and not counterparts).
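A toy construction of such a chain (my own, with a hypothetical similarity threshold and drift rate), just to show why this relation cannot partition possible people into equivalence classes:

```python
# "Similarity" drifts slowly along a chain of possible people; anyone similar
# enough counts as a counterpart.  Neighbours are counterparts of each other,
# but the relation is not transitive, so it yields no equivalence classes.

COUNTERPART_THRESHOLD = 0.9  # hypothetical cut-off for "similar enough"
DRIFT_PER_STEP = 0.05        # hypothetical amount of drift between neighbours

def similarity(i: int, j: int) -> float:
    """Similarity of persons i and j, decaying with distance along the chain."""
    return max(0.0, 1.0 - DRIFT_PER_STEP * abs(i - j))

def are_counterparts(i: int, j: int) -> bool:
    return similarity(i, j) >= COUNTERPART_THRESHOLD

print(are_counterparts(0, 1))   # True:  immediate neighbours in the chain
print(are_counterparts(1, 2))   # True:  likewise, all the way down the chain
print(are_counterparts(0, 40))  # False: the ends have drifted too far apart
```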
UDT doesn’t seem to work this way. In UDT, “you” are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extents with different physical entities in different worlds. This leads to the question of whether some algorithms are more “conscious” than others. I don’t think UDT currently has an answer for this, but neither do other frameworks.
You weren’t born believing in the many-worlds interpretation (or in modal realism), and if you are a normal human being you most likely regarded it as quite outlandish at some point. Then some line of evidence or reasoning caused you to shift your opinion (e.g. because it seemed simpler, or overall a better explanation of the physical evidence). If it shifted one way, then considering other evidence could shift it back again.
If we think of knowledge as a layered pie, with lower layers corresponding to knowledge that is more “fundamental”, then somewhere near the bottom we have paradigms of reasoning such as Occam’s razor / Solomonoff induction and UDT. Below them lie “human reasoning axioms”, which we cannot formalize due to our limited introspection ability. In fact, the paradigms of reasoning are our current best efforts at formalizing those axioms. However, when we build an AI we need to use something formal; we cannot just transfer our reasoning axioms to it (at least I don’t know how to do it; it seems to me that every way of doing it would be “ingenuine”, since it would be based on a formalism). So, for the AI, UDT (or whatever formalism we use) is the lowest layer. Maybe this is a philosophical limitation of any AGI, but I doubt it can be overcome and I doubt it’s a good reason not to build an (F)AI.
That is not the case if the money can be used in a way that has long-term impact.
OK, I was using $ here as a proxy for utils, but technically you’re right: the bet should be expressed in utils (as in the general definition of a chance that I gave in my comment). Or, if you don’t know how to bet in utils, use another proxy which is a consumptive good and can’t be invested (e.g. chocolate bars, or vouchers for a cinema trip this week). A final loophole is time discounting: the real versions of you mostly live earlier than the sim versions of you, so perhaps a chocolate bar for the real “you” is worth many chocolate bars for sim “you”s? However, we covered that earlier in the thread as well: my understanding is that your effective discount rate is not high enough to outweigh the huge numbers of sims.
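A back-of-the-envelope check of that last claim, with numbers that are purely hypothetical (the discount rate, the time gap and the sim count are all made up):

```python
# Does exponential time discounting outweigh the sheer number of sims?

annual_discount = 0.99    # hypothetical: future chocolate bars lose 1% of value per year
years_until_sims = 500    # hypothetical gap between the real "you"s and the sim "you"s
sims_per_real = 10**12    # hypothetical number of simulated copies per real copy

real_weight = 1.0
sim_weight = sims_per_real * annual_discount ** years_until_sims

print(sim_weight / real_weight)  # ~6.6e9: even after discounting, the sims dominate
```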
An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?
Well, this is your utility function, so you tell me! Imagine a hacker is able to get into the simulations and replace pleasant experiences with horrible torture. Does your utility function care twice as much if he hacks both simulations versus hacking just one of them? (My guess is that it does.) And this style of reasoning may cover limit cases like a simulation running on a wafer which is then cut in two (think about whether the sims are independently hackable, and how much you care).
An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?
Well, this is your utility function, so you tell me! Imagine a hacker is able to get into the simulations and replace pleasant experiences with horrible torture. Does your utility function care twice as much if he hacks both simulations versus hacking just one of them? (My guess is that it does.)
It wouldn’t be exactly twice, but you’re more or less right. However, this has no direct relation to probability. To see this, imagine you’re a paperclip maximizer. In this case you don’t care about torture or anything of the sort: you only care about paperclips. So your utility function specifies a way of counting paperclips but no way of counting copies of you.
From another angle, imagine your two simulations are offered a bet. How should they count themselves? Obviously it depends on the rules of the bet: whether the payoff is handed out once or twice. Therefore, the counting is ambiguous.
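To make the ambiguity concrete, here is the same bet under the two payout rules, with made-up numbers (the weight w is hypothetical):

```python
# A bet pays 1 util if "you" are simulated and costs 1 util otherwise.
# Suppose the utility function puts weight w on the simulation world.

w = 0.5  # hypothetical weight on the world containing the two sims

eu_paid_once     = w * 1 - (1 - w) * 1   # payoff pooled across the two sims -> 0.0 (fair bet)
eu_paid_per_copy = w * 2 - (1 - w) * 1   # each sim receives the payoff      -> 0.5 (favourable)

print(eu_paid_once, eu_paid_per_copy)
# The same weight w yields different betting behaviour under the two rules,
# so the "probability of being a sim" implied by the bets is rule-dependent.
```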
What you’re trying to do is write the utility function as a convex combination of utility functions associated with different copies of you. Once you accomplish that, the coefficients of the combination can be interpreted as probabilities. However, there is no such canonical decomposition.
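In symbols (my notation), the proposed decomposition would be

\[
U = \sum_i p_i \, U_i , \qquad p_i \ge 0 , \qquad \sum_i p_i = 1 ,
\]

where U_i is the utility function associated with copy i, and the coefficients p_i would then be read as the probability of being copy i. For the paperclip maximizer above, every U_i is the same paperclip count, so any choice of the p_i reproduces the same U and no decomposition is singled out.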
As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative “drift” along the chain, so that the ends are very different from each other (and not counterparts).
UDT doesn’t seem to work this way. In UDT, “you” are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extents with different physical entities in different worlds. This leads to the question of whether some algorithms are more “conscious” than others. I don’t think UDT currently has an answer for this, but neither do other frameworks.
I think it works quite well with “you” as a concrete entity. Simply use the notion that “your” decisions are linked to those of your counterparts (and indeed, to other agents), such that if you decide in a certain way in given circumstances, your counterparts will decide that way as well. The linkage will be very tight for neighbours in the chain, but will diminish gradually with distance, so that the ends of the chain are not linked at all. This, I think, addresses the problem of trying to identify what algorithm you are implementing, or of partitioning possible people into those who are running “the same” algorithm.
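A toy sketch of that graded linkage (the falloff function and payoffs below are hypothetical, just to make the picture concrete):

```python
# Linkage strength between "you" and a counterpart falls off with distance
# along the chain; a decision is valued by summing the payoffs it produces,
# weighted by how tightly each counterpart's decision is linked to yours.

def linkage(distance: int, falloff: float = 0.25) -> float:
    """1.0 for yourself, decaying linearly to 0.0 far down the chain."""
    return max(0.0, 1.0 - falloff * distance)

def linked_value(payoffs_by_distance: list[float]) -> float:
    """Value of a decision, counting counterparts in proportion to linkage."""
    return sum(linkage(d) * payoff for d, payoff in enumerate(payoffs_by_distance))

# Hypothetical payoff of 1.0 for you (distance 0) and four counterparts further out.
print(linked_value([1.0, 1.0, 1.0, 1.0, 1.0]))
# 1.0 + 0.75 + 0.5 + 0.25 + 0.0 = 2.5: neighbours count almost fully,
# the far end of the chain not at all.
```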
Actually I was speaking of a different problem, namely the philosophical problem of which abstract algorithms should be regarded as conscious (assuming the concept makes sense at all).
Identifying one’s own algorithm is an introspective operation whose definition is not obvious for humans. For AIs the situation is clearer if we assume the AI has access to its own source code.