If I am actually immortal, and there is no other way to get Utilons, then each day the value of me opening the box is something like:
Value = Utilons / Future Days
The expected value of opening the box is:
Value = Utilons
That is all. That number already represents how much value is assigned to the state of the universe given that decision. Dividing by only the future days is an error: assigning a different value to the specified reward depending on whether a day lies in the past or the future changes the problem.
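To make the contrast concrete, here is a minimal sketch; the figures for the reward and the horizon are made up purely for illustration.

```python
# A minimal sketch of the two valuations being contrasted.
# UTILONS and FUTURE_DAYS are invented numbers, not part of the problem.

UTILONS = 100.0        # utilons awarded for opening the box
FUTURE_DAYS = 1_000.0  # some finite horizon; for a true immortal this grows without bound

# The quoted (mistaken) valuation: spreading the reward over future days only.
value_divided = UTILONS / FUTURE_DAYS   # -> 0.1, and -> 0 as FUTURE_DAYS grows

# The valuation argued for here: the utility function already scores the
# whole state of the universe that results from the decision.
value_correct = UTILONS                 # -> 100.0, independent of the horizon

print(value_divided, value_correct)
```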
Presumably, if Utilons are useful at all, then you use them. Usually, this means that some are lost each day in the process of using them.
Further, unless the Utilons represent some non-entropic resource, I will lose some number of Utilons each day even if I don't lose them by using them. This works out to the same answer in the long run.
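A rough sketch of that long-run claim, assuming (purely for illustration) that a fixed fraction of the Utilons is lost each day:

```python
# A rough sketch of losing Utilons over time, whether through spending them
# or through entropy. The daily loss rate is an assumption chosen purely
# for illustration.

def utilons_remaining(initial: float, daily_loss_rate: float, days: int) -> float:
    """Utilons left after `days` of losing a fixed fraction each day."""
    return initial * (1.0 - daily_loss_rate) ** days

print(utilons_remaining(100.0, 0.01, 10))     # ~90.4 after 10 days
print(utilons_remaining(100.0, 0.01, 1000))   # ~0.004 after 1000 days
print(utilons_remaining(100.0, 0.01, 10**6))  # effectively 0 in the long run
```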
Let’s assume we have an agent Boxxy, an immortal AI whose utility function holds that opening the box tomorrow is twice as good as opening it today. Once it opens the box, its utility function assigns that much value to the universe. Let’s assume this is all it values. (This gets us around a number of problems with the scenario.)
Even in this scenario, unless Boxxy is immune to entropy, some amount of information (and thus some perception of utility) will be lost over time. Over a long enough span, Boxxy will eventually lose the memory of opening the Box. Even if Boxxy is capable of self-repair in the face of entropy, unless Boxxy can actually avoid undergoing entropy, some of the Box-information will be lost. (Maybe Boxxy hopes that it can replace it with an identical memory for the purposes of its utility function, although I suspect at that point Boxxy might just decide to remember having opened the Box at a nearer future date.) Eventually, Boxxy’s memory, and thus Boxxy’s Utilons, will either be completely artificial, with at best something like a causal relationship to previous memory states of opening the box, or Boxxy will lose all of its Utilons.
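As a toy illustration (not part of the original scenario), suppose the nominal reward doubles for each day Boxxy waits, while the surviving fraction of the box-memory shrinks by some tiny amount for each day after opening; both rates below are invented:

```python
# A toy model, not the original scenario's specification: the reward for
# waiting doubles each day, but the fraction of the box-memory surviving
# entropy shrinks slightly each day after the box is opened.
# Both rates are invented for illustration.

def retained_value(days_waited: int, days_since_opening: int,
                   decay_per_day: float = 1e-6) -> float:
    """Nominal reward 2**days_waited, scaled by the fraction of the
    memory of opening the box that has survived so far."""
    nominal = 2.0 ** days_waited
    surviving_fraction = (1.0 - decay_per_day) ** days_since_opening
    return nominal * surviving_fraction

# Shortly after opening, almost all of the reward is still "there"...
print(retained_value(days_waited=30, days_since_opening=100))
# ...but on a long enough timescale the surviving fraction goes to zero,
# however large the nominal reward was.
print(retained_value(days_waited=30, days_since_opening=10**9))
```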
Of course, Boxxy might never open the box. (I am not a superintelligence obsessed with box opening; I am a human intelligence obsessed with things Boxxy would find irrelevant, so I can only guess at what a box-based AGI would do.) In that case the Utilons won’t degrade, but Boxxy can still only expect a value of 0.
Frankly, the problem is hard to think about at that level, because real immortality (as the problem requires) would need some way to ensure that entropy doesn’t occur while some sort of process still does, which seems a contradiction in terms. I suppose this could be occurring in a universe without entropy (but which somehow has other processes), although both my intuitions and my knowledge are so firmly rooted in a universe with entropy that I don’t have a good grounding for evaluating problems in such a universe.
No. Those are resources.