For all intents and purposes, it’s equivalent to saying “you have only one shot”, and after the memory erasure it’s not you anymore, but a person equivalent to another version of you in the next room.
Let’s assume “it’s not you anymore” is false, at least for a moment (even if that goes against LDT or something else).
Yes, you have a 0.1 chance of being punished. But who cares, if they’re going to erase your memory anyway?
Okay, let’s imagine that you do that experiment 9999999 times, and then you get all your memories back.
You’re still better off drinking. The probabilities don’t change. Yes, if you are consistent in your choice (which you should be), you have a 0.1 probability of being punished again and again and again. But you also have a 0.9 probability of being rewarded again and again and again.
Of course that seems counterintuitive, because in real life the prospect of “infinite punishment” (or nearly infinite punishment) is usually something to be avoided at all costs, even if it means forgoing the reward. That’s because in real life your utility scales highly non-linearly: even if a single punishment and a single reward have equal utility measure, 9999999 punishments in a row is a larger utility loss than the utility gain from 9999999 rewards.
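Here’s a minimal sketch of that asymmetry, with numbers made up purely for illustration (stakes of ±1 per round, a 0.9/0.1 gamble over which streak you get, 9999999 rounds, and a concave logarithmic utility over some underlying resource; none of these specifics come from the original setup):

```python
import math

# Made-up numbers: equal-sized reward/punishment per round, 0.9 chance of the
# winning streak, 0.1 chance of the losing streak, 9999999 rounds, and a
# hypothetical starting "resource" that a full losing streak nearly wipes out.
p_reward, p_punish = 0.9, 0.1
stake, rounds = 1, 9_999_999
start = 10_000_000

# Linear utility: repetition just scales the per-round expected value,
# so the decision to drink never flips.
ev_linear = (p_reward * stake - p_punish * stake) * rounds
print(ev_linear > 0)  # True

# Concave (log) utility: a full streak of losses costs far more utility than
# an equally long streak of gains provides, so the same gamble can flip sign.
gain = math.log(start + rounds * stake) - math.log(start)   # ~0.69
loss = math.log(start) - math.log(start - rounds * stake)   # ~16.1
ev_concave = p_reward * gain - p_punish * loss
print(ev_concave)  # negative
```

Under the linear accounting, the repeated gamble stays exactly as attractive as a single round; under the concave accounting, the losing streak dominates, which matches the real-life intuition.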
Also, in real life you don’t lose your memory every 5 seconds, and you have a chance to learn from your mistakes.
But if we’re talking about a spherical decision theory in a vacuum, you should drink.
I think you’re going for the most trivial interpretation instead of trying to explore the interesting/unique aspects of the setup. (Not implying any blame. And those “interesting” aspects may not actually exist.) I’m not good at math, but not so bad that I don’t know the most basic 101 idea of multiplying utilities by probabilities.
I’m trying to construct a situation (X) where the normal logic of probability breaks down, because each possibility is embodied by a real person and all those persons are in conflict with each other.
Maybe it’s impossible to construct such a situation, for example because any normal situation can be modeled the same way (different people in different worlds who don’t care about each other or even hate each other). But the possibility of such a situation is an interesting topic we could explore.
Here’s another attempt to construct “situation X”:
We have 100 persons.
1 person has a 99% chance to get a big reward and a 1% chance to get nothing, if they drink.
99 persons each have a 0.0001% chance to get a big punishment and a 99.9999% chance to get nothing.
Should a person drink? Answering “yes” is a policy that will always lead to exploiting 99 persons for the sake of 1 person. If all those persons hate each other, their implicit agreement to such a policy seems strange.
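To put made-up numbers on that policy (the setup only says “big reward” and “big punishment”, so the magnitudes R and P below, and the equal weighting of roles, are pure assumptions), here is the standard expected-utility bookkeeping that endorses “yes”:

```python
# Hypothetical magnitudes; "big reward" and "big punishment" are not quantified
# in the setup, so equal sizes are assumed here purely for illustration.
R, P = 1_000.0, 1_000.0

ev_lucky = 0.99 * R            # the 1 person: 99% big reward, 1% nothing
ev_other = -1e-6 * P           # each of the 99: 0.0001% big punishment

# Policy "everyone drinks", summed over all 100 persons.
ev_policy = ev_lucky + 99 * ev_other
print(ev_policy)               # 989.901: hugely positive in aggregate

# A person who doesn't know which of the 100 roles they occupy,
# weighting the roles equally (another assumption).
ev_ignorant = 0.01 * ev_lucky + 0.99 * ev_other
print(ev_ignorant)             # ~9.899: "drink" still looks good ex ante
```

The numbers endorse “yes” either way, which is exactly the implicit agreement that looks strange once the 100 persons hate each other.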
Here’s another angle on what I’d like to explore.
Imagine I have a 99% chance to get a reward and a 1% chance to get a punishment if I take a pill. I’ll take the pill. If we imagine that each possibility is a separate person, this decision can be interpreted in two ways:
1 person altruistically sacrifices their well-being for the sake of 99 other persons.
100 persons each think, egoistically, “I can get lucky”. Only 1 person is mistaken.
And the same is true for other situations involving probability. But is there any situation (X) that could differentiate between the “altruistic” and “egoistic” interpretations?
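As far as I can tell, ordinary probability can’t tell the two readings apart: whether you book-keep the 99%/1% pill as one agent’s gamble or as 100 equally weighted possibility-persons, the arithmetic is identical (the payoff magnitudes below are assumed equal, just for illustration).

```python
reward, punishment = 1.0, -1.0          # assumed magnitudes

# Bookkeeping as one agent facing a 99%/1% gamble.
ev_single_gamble = 0.99 * reward + 0.01 * punishment

# Bookkeeping as 100 equally weighted possibility-persons,
# 99 of whom get the reward and 1 of whom gets the punishment.
persons = [reward] * 99 + [punishment]
ev_population = sum(persons) / len(persons)

print(ev_single_gamble, ev_population)  # both 0.98
```

Both tallies come out to 0.98, which is why situation X would need something beyond this bookkeeping to separate the interpretations.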