And their claim afterwards? Any threat you can make, they can make. You see why this is a dead end?
That’s kind of what I’m trying to point out here. It is a dead end, but I’m actually making the claim below. Sure, someone else can make the same claim as well; we can both make it. Now, whom do you believe more?
But let’s formalize my claim. My claim is that I will make n+1 utilitons happen if n is positive, or n-1 utilitons happen if n is negative, as long as you do the opposite of what they tell you to do.
Where n is the number of utilitons they offer for any given result.
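To make the arithmetic concrete, here is a minimal sketch of the payoff logic (Python). The function name is mine, and I am reading a negative n as the mugger inflicting n for defiance while the counter-commitment inflicts n-1 for obedience; that reading is an assumption, not something spelled out above.

```python
def net_utilitons(n: int, obey: bool) -> int:
    """Net payout when the mugger stakes n utilitons and the counter-claim is active."""
    # The mugger: pays n for obedience (n > 0), or inflicts n for defiance (n < 0).
    mugger = n if (obey if n > 0 else not obey) else 0
    # The counter-commitment out-bids by one utiliton in the opposite direction:
    # reward n + 1 for defiance (n > 0), or inflict n - 1 for obedience (n < 0).
    if n > 0:
        counter = n + 1 if not obey else 0
    else:
        counter = n - 1 if obey else 0
    return mugger + counter

for n in (100, -100):
    print(f"n={n:+d}: obey -> {net_utilitons(n, True):+d}, "
          f"defy -> {net_utilitons(n, False):+d}")
# n=+100: obey -> +100, defy -> +101
# n=-100: obey -> -101, defy -> -100
# Under either sign of n, defying the mugger comes out one utiliton ahead.
```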
I’m outside of your conception of time. So if they make the threat afterwards, it is of no concern to me.
You can’t just wave your hands like that. What if the mugger offers a more complicated deal, like a two-step reward where the second step overcomes your penalty? Are you just going to say ‘fine, my precommitment is to the net value’? But then the mugger can just toss a Turing machine into his offer, and now your attempt to analyze his offer is equivalent to solving the halting problem! If you claim to have an oracle on hand, he can claim one too, and that just relativizes the problem, because with an oracle there are now meta-halting problems… etc. Your strategy doesn’t work. Deal with it.
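To see what the Turing-machine move costs the defender, here is a toy sketch of the standard diagonalization (Python; all names here are hypothetical). If the offer’s payout is conditioned on whether an embedded program halts, any halting-decider the defender precommits to can be defeated by an offer built against it:

```python
def make_spiteful_offer(claimed_halts):
    """Given any purported total halting-decider, return an embedded
    program whose behavior makes the decider's verdict wrong."""
    def embedded():
        if claimed_halts(embedded):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately

    return embedded

def naive_decider(program):
    return True              # a (wrong) total decider: claims everything halts

embedded = make_spiteful_offer(naive_decider)
print(naive_decider(embedded))  # prints True, yet embedded() would loop forever
# Whatever decider the defender brings, the mugger can wrap his offer around
# a program like this, so the offer's net value is uncomputable in general.
```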
I’m afraid you still haven’t shown me enough evidence. If you’ll bear with me a moment longer, I’ll try to explain it better.
But then the mugger can just toss a Turing machine into his offer, and now your attempt to analyze his offer is equivalent to solving the halting problem
I want you to think through exactly the consequences of that. Now you have two omnipotent god entities, each offering you a more or less random result. All you know about the result is that it’s going to be positive or negative. One offers you a random positive result if you kill a baby; the other offers you a random negative result if you kill her. Do you get, on average and over a very large sample size, more utility from killing the baby or not killing her?
Now you have two omnipotent god entities, each offering you a more or less random result. All you know about the result is that it’s going to be positive or negative.
One of which is in a temporally advantaged position, where he can do anything you can do and more in addition: a strictly superior position.
Do you get, on average and over a very large sample size, more utility from killing the baby or not killing her?
Without auxiliary arguments about what sample space we are drawing from, I don’t see how you could possibly come to any kind of conclusion about this.
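As a quick illustration of how much rides on that choice, here is a Monte Carlo sketch (Python). The uniform distributions are stand-ins I picked; nothing above pins down the actual sample space.

```python
import random

def average_payout_for_killing(trials: int = 100_000) -> float:
    """Average net utility for killing, under one assumed sample space:
    each god draws its magnitude uniformly from (0, 1000)."""
    total = 0.0
    for _ in range(trials):
        total += random.uniform(0, 1000)   # first god: random positive result
        total -= random.uniform(0, 1000)   # second god: random negative result
    return total / trials

print(f"average utility for killing: {average_payout_for_killing():+.2f}")
# Hovers near zero; not killing pays nothing from either god, so under this
# symmetric choice the two actions tie on average. Skew either distribution
# and the verdict flips, which is why the sample space has to be argued for.
```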
One of which is in a temporally advantaged position, where he can do anything you can do and more in addition: a strictly superior position.
Sorry, explain to me how this hypothetical god-being can consistently exceed my threat? Presuming we both act from the same privileged outside-your-time perspective?