I wait until there are so many utilons in the box that I can use them to get two identical boxes and still have some utilons left over. Every time a box has more than enough utilons to make two identical boxes, I repeat that step. Any utilons not used to make new boxes are the dividend of the investment.
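Roughly in code, that's a compounding loop. The sketch below is only illustrative: the box price, the payout per step, and the rule that a cashed-in box is replaced by its two new copies are all my own assumptions, since the original setup doesn't pin any of them down.

```python
# Toy model of the reinvestment strategy above. BOX_COST, PAYOUT, and the
# replacement rule are assumptions for illustration only.

BOX_COST = 100   # assumed utilon cost of one identical box
PAYOUT = 7       # assumed utilons a box accrues per time step

def simulate(steps):
    boxes = [0]          # utilons accrued inside each box
    dividend = 0         # utilons kept rather than reinvested
    for _ in range(steps):
        boxes = [b + PAYOUT for b in boxes]
        next_boxes = []
        for b in boxes:
            if b > 2 * BOX_COST:
                next_boxes += [0, 0]             # two fresh boxes
                dividend += b - 2 * BOX_COST     # leftover is the dividend
            else:
                next_boxes.append(b)
        boxes = next_boxes
    return len(boxes), dividend

print(simulate(200))  # box count grows geometrically; the dividend trickles out
```

The box count doubles every time the threshold is crossed, so the strategy is a geometric-growth scheme with a small dividend skimmed off at each doubling.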
Now that you mention it, that’s true, and it gives me several other weird ideas. The box gives you tokens that you exchange for utilons, which seem to be defined as “whatever you want/define them to be, based on your values.”
Ergo, imagine a Happy Michaelos who gets about twice as many positive utilons from everything as Sad Michaelos, while Sad Michaelos gets twice as many NEGATIVE utilons from everything as Happy Michaelos.
Let’s say a cookie grants Happy Michaelos 1 utilon. It would take two cookies to grant Sad Michaelos 1 utilon.
Let’s say a stubbed toe grants Sad Michaelos −1 utilon. It would take two stubbed toes to grant Happy Michaelos −1 utilon.
So if either Happy Michaelos or Sad Michaelos gets to open the box, and they are friends who substantially share utility and cookies… it should be Sad Michaelos who does so (both will get more cookies that way).
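As a quick check on that arithmetic, here is a tiny sketch. The per-cookie multipliers are the ones just stated; the conversion "one token is worth exactly 1 utilon" is my own assumption, not something the box specifies.

```python
# Cookies needed to deliver one token's worth of utilons, assuming
# (my assumption) that one token pays out exactly 1 utilon.

def cookies_per_token(utilons_per_cookie, token_value=1.0):
    """How many cookies it takes to pay out token_value utilons."""
    return token_value / utilons_per_cookie

print(cookies_per_token(1.0))   # Happy Michaelos: 1.0 cookie per token
print(cookies_per_token(0.5))   # Sad Michaelos:   2.0 cookies per token
```

If the two of them pool cookies, redeeming the token through Sad Michaelos doubles the pile, which is the point of the paragraph above.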
As far as I can tell, this is a reasonable interpretation of the box.
So, I should probably figure out how the people below would work, since they are increasingly unreasonable interpretations of the box:
Extremely Sad Michaelos:
Is essentially 1 million times worse off than Sad Michaelos. Ergo, if the logic above holds, Extremely Sad Michaelos gets 2 million cookies from turning in a single token.
Hyper Pout Michaelos:
Is essentially 1 billion times worse off than Sad Michaelos. He also has a note in his utility function that he will receive −infinity (aleph 0) utilons if he does not change his utility function back to Sad Michaelos’s utility function within 1 second after the box is powerless and he has converted all of his tokens. If the logic above holds, Hyper Pout Michaelos gets 1 billion times more cookies than Sad Michaelos, and then gets to enjoy substantially more utilons from them!
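Under the same assumed one-token-equals-one-utilon conversion as before, these scaled-up variants just multiply Sad Michaelos’s rate:

```python
# Same assumption as the earlier sketch: one token pays out 1 utilon.
sad_rate = 2  # cookies per utilon for Sad Michaelos (from above)

print(sad_rate * 1_000_000)      # Extremely Sad Michaelos: 2,000,000 cookies per token
print(sad_rate * 1_000_000_000)  # Hyper Pout Michaelos: 2,000,000,000 cookies per token
```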
Omnidespairing Michaelos:
Is almost impossible to grant utilons to. The certainty of omnipotence grants him 1 utilon. Everything else that might be positive (say, a 99% chance of omnipotence) grants him 0 utilons.
This is a coherent utility function. You can even live and have a normal life with it if you also want to avoid negative utilons (eating might only grant −infinite (aleph 0) utilons, and not eating might grant −infinite (aleph 1) utilons).
Box Cynical Despairmax Michaelos:
Gets some aleph of negative infinite utilons from every decision whatsoever. Again, he can make decisions and go through the day, but no number of the tokens that the box grants seems to map to anything relevant in his utility function. For instance, waiting a day might cost him −infinite (aleph 2) utilons. Adding a finite number of utilons is irrelevant. He immediately opens the box so he can discard the useless tokens and get back to avoiding the incomprehensible horrors of life, and this is (as far as I can tell) a correct answer for him.
It seems like at least some of the utility functions above cheat the box, but I’m not sure which ones go too far, if the sample is reasonable. They all give entirely different answers as well:
1: Go through life as sad as possible.
2: Go through life pretending to be sad to get more and then actually be happy later.
3: Only omnipotence will make you truly happy. Anything else is an endless horror.
4: Life is pain, and the box is trying to sell you something useless, ignore it and move on.
If changing my utility function has expected positive results, based both on my current utility function and on the proposed one, then…
Here the problem is that the utilon is not a unit that can be converted into any other unit, including physical phenomena.