I think you’re going for the most trivial interpretation instead of trying to explore the interesting/unique aspects of the setup. (Not implying any blame, and those “interesting” aspects may not actually exist.) I’m not good at math, but not so bad that I don’t know the most basic 101 idea of multiplying utilities by probabilities.
I’m trying to construct a situation (X) where the normal logic of probability breaks down, because each possibility is embodied by a real person and all those persons are in conflict with each other.
Maybe it’s impossible to construct such a situation, for example because any normal situation can be modeled the same way (different people in different worlds who don’t care about each other, or even hate each other). But the possibility of such a situation is an interesting topic we could explore.
Here’s another attempt to construct “situation X”:
We have 100 persons.
1 person, if they drink, has a 99% chance to get a big reward and a 1% chance to get nothing.
The other 99 persons each have a 0.0001% chance to get a big punishment and a 99.9999% chance to get nothing.
Should the person drink? Answering “yes” is a policy that always exploits 99 persons for the sake of 1. If all those persons hate each other, their implicit agreement to such a policy seems strange.
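For the “trivial” expected-value reading of this scenario, here is a minimal sketch in Python, assuming placeholder payoff sizes (reward = +100, punishment = −100) that are not part of the original setup:

```python
# Expected-value check for the 100-person drinking scenario.
# The payoff magnitudes below are placeholder assumptions, not part of the setup.
REWARD = 100.0       # assumed size of the "big reward"
PUNISHMENT = -100.0  # assumed size of the "big punishment"

# The one drinker: 99% chance of the big reward, 1% chance of nothing.
ev_drinker = 0.99 * REWARD + 0.01 * 0.0

# Each of the other 99: 0.0001% chance of the big punishment, 99.9999% chance of nothing.
ev_other = 0.000001 * PUNISHMENT + 0.999999 * 0.0

# Aggregate expected utility over all 100 persons, with and without drinking.
total_if_drink = ev_drinker + 99 * ev_other
total_if_abstain = 0.0

print(f"EV for the drinker:       {ev_drinker:+.4f}")
print(f"EV for each of the 99:    {ev_other:+.6f}")
print(f"Group EV if they drink:   {total_if_drink:+.4f}")
print(f"Group EV if they abstain: {total_if_abstain:+.4f}")
```

With these placeholder numbers the aggregate expected value favors drinking; whether that is the right way to aggregate across 100 mutually hostile persons is exactly the question above.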
Here’s an explanation of what I’d like to explore from another angle.
Imagine that if I take a pill, I have a 99% chance to get a reward and a 1% chance to get a punishment. I’ll take the pill. If we imagine that each possibility is a separate person, this decision can be interpreted in two ways:
1 person altruistically sacrifices their well-being for the sake of 99 other persons.
100 persons each think, egoistically, “I can get lucky”. Only 1 person is mistaken.
And the same is true for other situations involving probability. But is there any situation (X) which could differentiate between “altruistic” and “egoistic” interpretations?
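As a concrete illustration of why those two readings are hard to pull apart, here is a minimal sketch that treats each probability branch of the pill decision as its own person (the payoff sizes are again placeholder assumptions):

```python
from collections import Counter

# The pill decision, with each probability branch treated as a separate person.
# Payoff sizes are placeholder assumptions, not part of the original setup.
REWARD, PUNISHMENT = 100.0, -100.0

# 99 "possibility-persons" land in reward branches, 1 lands in the punishment branch.
outcomes = [REWARD] * 99 + [PUNISHMENT]

# "Altruistic" reading: the 1 punished person knowingly absorbs the loss for the other 99.
# "Egoistic" reading: all 100 bet on being lucky, and exactly 1 loses the bet.
# Both readings describe the same outcome profile, so the profile alone cannot tell them apart.
print("Outcome counts:", Counter(outcomes))
print("Total utility across branches:", sum(outcomes))
```

The sketch only shows that the realized outcomes are identical under either reading; the open question is whether some situation X could make the two readings come apart in what they recommend.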