A stupid question about anthropics and [logical] decision theories. Could we “disprove” some types of anthropic reasoning based on [logical] consistency? I struggle with math, so please keep the replies relatively simple.
Imagine 100 versions of me; I’m one of them. We’re all egoists: none of us cares about the others.
We’re in isolated rooms, and each room has a drink. 90 drinks are rewards, 10 drinks are punishments. Everyone is given the choice to drink or not to drink.
The setup is iterated (with memory erasure), and everyone gets the same type of drink each time: if you got the reward, you get the reward every time. Only you can’t remember that.
If I reason myself into drinking (reasoning that I have a 90% chance of reward), from the outside it would look as if 10 egoists have agreed (very conveniently, to the benefit of others) to suffer again and again… is it a consistent possibility?
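A minimal sketch of the arithmetic behind this worry, assuming a reward is worth +1 and a punishment -1 per round (the post gives no utility numbers, so these are placeholders):

```python
# Minimal sketch of the two bookkeeping views in the setup above.
# Reward = +1 and punishment = -1 per round are placeholder values,
# not given in the original post.

REWARD, PUNISHMENT = 1.0, -1.0

# Inside view: any single copy, before opening its drink.
ev_per_copy = 0.9 * REWARD + 0.1 * PUNISHMENT      # +0.8, so "drink" looks good

# Outside view: what happens across all 100 copies in every round
# if everyone follows the "drink" policy.
total_per_round = 90 * REWARD + 10 * PUNISHMENT    # 90 rewarded, 10 punished

print(ev_per_copy, total_per_round)
# 0.8 per copy in expectation, +80 in aggregate, but because the drinks are
# fixed per copy it is the same 10 copies that get punished every round.
```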
I guess you’ve made it more confusing than it needs to be by introducing memory erasure into this setup. For all intents and purposes it’s equivalent to say “you have only one shot”, and after memory erasure it’s not you anymore, but a person equivalent to another version of you in the next room.
So what we’ve got is many different people in different spacetime boxes, each with only one shot, and yes, you should drink. Yes, you have a 0.1 chance of being punished. But who cares, if they will erase your memory anyway?
Actually, we are kind of living in that experiment: we’re all going to die eventually, so why bother doing stuff if you won’t care after you die? But I guess we’ve just got used to suppressing that thought, otherwise nothing would get done. So drink.
> For all intents and purposes it’s equivalent to say “you have only one shot”, and after memory erasure it’s not you anymore, but a person equivalent to another version of you in the next room.
Let’s assume “it’s not you anymore” is false, at least for a moment (even if it goes against LDT or something else).
> Yes, you have a 0.1 chance of being punished. But who cares, if they will erase your memory anyway?
Let’s assume that the persons do care.
Okay, let’s imagine that you do that experiment 9,999,999 times, and then you get back all your memories.
You’d still better drink. The probabilities don’t change. Yes, if you are consistent with your choice (which you should be), you have a 0.1 probability of being punished again and again and again. You also have a 0.9 probability of being rewarded again and again and again.
Of course that seems counterintuitive, because in real life the prospect of “infinite punishment” (or nearly infinite punishment) is usually something to be avoided at all costs, even if you don’t get the reward. That’s because in real life your utility scales highly non-linearly, and even if a single punishment and a single reward have equal utility measure, 9,999,999 punishments in a row is a larger utility loss than the utility gain from 9,999,999 rewards.
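A small sketch of that non-linearity point, with made-up numbers: the unit reward/punishment values and the slightly compounding disutility of repeated punishments are illustrative assumptions, not part of the original setup.

```python
# Sketch: with linear utility, N equal-sized rewards and N equal-sized
# punishments cancel out exactly; if losses compound slightly, N punishments
# in a row outweigh N rewards. All numbers here are illustrative.

N = 9_999_999

# Linear utility: total gain and total loss have the same magnitude.
linear_gain = N * 1.0
linear_loss = N * 1.0

# Non-linear utility: each additional punishment hurts a tiny bit more
# than the previous one (geometric growth with ratio 1.0000001).
def compounding_loss(k, growth=1.0000001):
    # sum of growth**0 + growth**1 + ... + growth**(k - 1)
    return (growth**k - 1) / (growth - 1)

print(linear_gain)            # ~1.0e7
print(compounding_loss(N))    # ~1.7e7, noticeably larger than the gain
```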
Also, in real life you don’t lose your memory every 5 seconds, and you have a chance to learn from your mistakes.
But if we’re talking about a spherical decision theory in a vacuum: you should drink.
I think you’re going for the most trivial interpretation instead of trying to explore the interesting/unique aspects of the setup. (Not implying any blame. And those “interesting” aspects may not actually exist.) I’m not good at math, but not so bad that I don’t know the most basic 101 idea of multiplying utilities by probabilities.
I’m trying to construct a situation (X) where the normal logic of probability breaks down, because each possibility is embodied by a real person and all those persons are in conflict with each other.
Maybe it’s impossible to construct such a situation, for example because any normal situation can be modeled the same way (different people in different worlds who don’t care about each other or even hate each other). But the possibility of such a situation is an interesting topic we could explore.
Here’s another attempt to construct “situation X”:
We have 100 persons.
1 person has a 99% chance to get a big reward and a 1% chance to get nothing, if they drink.
99 persons each have a 0.0001% chance to get a big punishment and a 99.9999% chance to get nothing, if they drink.
Should a person drink? The answer “yes” is a policy which will always lead to exploiting 99 persons for the sake of 1 person. If all those persons hate each other, their implicit agreement to such a policy seems strange.
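For concreteness, here is a sketch of the ex-ante value of the “always drink” policy, before you know which of the 100 persons you are; the +100/-100 magnitudes are placeholders, since the post only says “big reward” and “big punishment”.

```python
# Sketch of the ex-ante value of the "always drink" policy, before you
# know which of the 100 persons you are. The +100/-100 magnitudes are
# placeholders; the post only says "big reward" and "big punishment".

BIG_REWARD, BIG_PUNISHMENT = 100.0, -100.0

p_lucky_role = 1 / 100       # chance of being the 1 person with the reward gamble
p_risky_role = 99 / 100      # chance of being one of the 99 others

ev_drink = (p_lucky_role * 0.99 * BIG_REWARD
            + p_risky_role * 0.000001 * BIG_PUNISHMENT)   # 0.0001% = 1e-6
ev_refuse = 0.0

print(ev_drink, ev_refuse)
# ~ +0.99 vs 0, so "drink" wins ex ante, even though in every realized run
# the benefit goes to the 1 person and the risk is borne by the 99 others.
```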
Here’s an explanation, from another angle, of what I’d like to explore.
Imagine I have a 99% chance to get a reward and a 1% chance to get a punishment if I take a pill. I’ll take the pill. If we imagine that each possibility is a separate person, this decision can be interpreted in two ways:
1 person altruistically sacrifices their well-being for the sake of 99 other persons.
100 persons each think, egoistically, “I can get lucky”. Only 1 person is mistaken.
And the same is true for other situations involving probability. But is there any situation (X) which could differentiate between “altruistic” and “egoistic” interpretations?
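One way to see why ordinary situations don’t tell the two readings apart, sketched with placeholder utilities of +1 and -1 (not given in the post): the single-person expected value and the averaged welfare of the “one person per possibility” picture come out as the same number.

```python
# Sketch: the "egoistic" and "altruistic" readings of the pill example
# give the same number, assuming reward = +1 and punishment = -1
# (placeholders, not from the post).

REWARD, PUNISHMENT = 1.0, -1.0

# Egoistic reading: one person facing a 99%/1% gamble.
ev_single = 0.99 * REWARD + 0.01 * PUNISHMENT

# "One person per possibility" reading: 100 people, 99 rewarded, 1 punished;
# average welfare across all of them.
population = [REWARD] * 99 + [PUNISHMENT]
avg_welfare = sum(population) / len(population)

print(ev_single, avg_welfare)   # 0.98 and 0.98: the two pictures agree,
                                # which is why a "situation X" is hard to build.
```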