If anyone accepts a Pascal's-mugging-style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. Or a suitably higher number if they include a higher (plausible, from my external viewpoint) number. Rest assured I can at least match their raw computing power from where I am.
I’ll reward or punish with a suitably higher number of negative or positive utilitons if they include a higher number of negative or positive utilitons.
I go for a walk and meet a mugger. He hears about your precommitment and says, ‘ah well, I know what reward I can offer you which overcomes both your expressed probability of 1/3^^^^3 that I will pay and the threat of −3^^^^3: I will offer you not 3^^^^3 but 3^^^^^^3! If you simply multiply out the expected value and then subtract staticIP’s threat, you will find this is a very lucrative deal for you!’
I conclude his logic is unimpeachable and the net expected value so vast I would be a fool to not give him $5, and promptly do so.
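The mugger’s arithmetic can be sketched directly. The actual quantities (3^^^^3 and friends) are far too large to compute, so this sketch uses small stand-in numbers; the names and values are illustrative assumptions, not anything from the thread. The point it shows: for any fixed credence and any fixed counter-threat, some reward clears the bar.

```python
# Sketch of the mugger's counter-offer with stand-in numbers
# (3^^^^3 is far too large to compute, so small placeholders
# make the same structural point).
p = 1e-12      # your stated probability that the mugger pays up (stand-in)
threat = 1e12  # the precommitted counter-threat, in utilitons (stand-in)
cost = 5       # the $5 handed over, counted as 5 utilitons for simplicity

# Whatever p, threat, and cost happen to be, the mugger can always
# name a reward R large enough that accepting has positive expected value:
R = (threat + cost + 1) / p
ev_accept = R * p - threat - cost
print(ev_accept > 0)  # True: the counter-offer always clears the bar
```

Because R is chosen as a function of the threat, raising the threat just raises R; no finite precommitment changes the sign of the result.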
So I’ll add a couple of orders of magnitude. I’m certain he will as well. It ultimately comes down to which of us you believe more. Do you have any reason to believe him more than me?
You’re not following the process here. ‘Which of you I believe more’ is already being compensated for by additional increases in the promised reward. So then your solution doesn’t work: any precommitment you make, a mugger can just trivially overcome.
The same is true for me. Any threat the mugger makes I can trivially overcome, being a god entity and all that.
I’m always going to claim that my threat is equal to or greater than their threat. Make sense?
And their claim afterwards? Any threat you can make, they can make. You see why this is a dead end?
(And what kind of decision theory requires a third party to make precommitments before you can make the right decision, anyway?)
3PDT, perhaps? ;)
That’s kind of what I’m trying to point out here. It is a dead end, but I’m actually making the claim below. Sure, someone else can make the same claim as well. We can both make the claims. Now, who do you believe more?
But let’s formalize my claim. My claim is that I will make n+1 utilitons happen if n is positive, or n-1 utilitons happen if n is negative, as long as you do the opposite of what they tell you to do.
Where n is how many utilitons they offer given any result.
I’m outside of your conception of time. So if they make the threat afterwards, it is of no concern to me.
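The n+1 rule, combined with the mugger’s matching rule from earlier in the thread, generates a bidding war. A sketch with stand-in numbers (the opening offer and round count are arbitrary):

```python
# Sketch of the bidding war the n+1 rule sets off. Both parties follow
# the same "top the standing offer by one utiliton" rule, so the offers
# escalate without bound and no finite precommitment settles anything.

def top_by_one(n):
    """Return n+1 utilitons if n is positive, n-1 if n is negative."""
    return n + 1 if n > 0 else n - 1

offer = 10                 # the mugger's opening offer (stand-in)
history = [offer]
for _ in range(5):         # five rounds of counter-offers
    offer = top_by_one(offer)  # the precommitted counter
    offer = top_by_one(offer)  # the mugger tops it right back
    history.append(offer)
print(history)  # [10, 12, 14, 16, 18, 20]: strictly increasing, no fixed point
```

The sequence has no fixed point, which is exactly the "dead end" conceded above: each rule can always be applied one more time.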
You can’t just wave your hands like that. What if the mugger offers a more complicated deal like a 2-step reward, where the second step overcomes your penalty? Are you just going to say ‘fine my precommitment is to the net value’? But then the mugger can just toss in a Turing machine to his offer, and now your attempt to analyze his offer is equivalent to solving a halting problem! If you claim to have an oracle on hand, so can he, and that just relativizes the problem because with an oracle, now there are meta-halting problems… etc.
Your strategy doesn’t work. Deal with it.
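The halting-problem point can be made concrete. If the mugger phrases his offer as ‘the payout is whatever this program computes’, then pricing the offer means deciding whether the program halts. A toy sketch, using the Collatz iteration as a stand-in for the mugger’s arbitrary program (whether it halts for every input is a famous open problem; the function name is hypothetical):

```python
def payout(k):
    """Hypothetical mugger's offer: 'run this program; the payout in
    utilitons is the number of steps it takes.' This is the Collatz
    iteration: nobody has proved it halts for every k, so no general
    procedure can price offers of this shape in advance."""
    steps = 0
    while k != 1:
        k = k // 2 if k % 2 == 0 else 3 * k + 1
        steps += 1
    return steps

print(payout(6))   # 8: this input happens to terminate quickly
print(payout(27))  # 111: also fine, but no bound is known in general
```

Any precommitment stated as ‘I will analyze the net value of the offer’ therefore presupposes a decision procedure that cannot exist in general, and adding an oracle only pushes the same problem up a level.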
I’m afraid you still haven’t shown me enough evidence. If you’ll bear with me a moment longer, I’ll try to explain it better.
I want you to think of exactly the consequences of that. Now you have two omnipotent god entities, each offering you a more or less random result. All you know about the result is that it’s going to be positive or negative. One offers you a random positive result if you kill a baby, one offers you a random negative result if you kill a baby. Do you get, on average and over a very large sample size, more utility from killing the baby or not killing her?
One of which is in a temporally advantaged position in which he can do anything you can do and do more in addition to that—a strictly superior position.
Without auxiliary arguments about what sample space we are drawing from, I don’t see how you could possibly come to any kind of conclusion about this.
Sorry, explain to me how this hypothetical god-being can exceed my threat consistently? Presuming we are both from the same privileged outside-your-time perspective?
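The sample-space objection above can be sketched numerically. ‘A random positive result’ has no expected value until a distribution is fixed, and once one is fixed, the distribution does all the work. A seeded simulation under one arbitrary assumption (both payouts drawn uniformly from (0, 10^6); every number here is an illustrative assumption):

```python
import random

random.seed(0)  # deterministic, for illustration only
trials = 100_000

# One entity offers a random positive payout for the act, the other a
# random negative payout. Under the (arbitrary!) assumption that both
# draw uniformly from (0, 1e6), the net utility of the act averages out
# to roughly zero; pick asymmetric distributions and the answer flips.
net = sum(random.uniform(0, 1e6) - random.uniform(0, 1e6)
          for _ in range(trials))
print(abs(net / trials) < 1e4)  # True: near zero under this assumption
```

The conclusion depends entirely on the auxiliary choice of distribution, which is the point: without that choice, the question has no answer.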