I’m afraid you still haven’t shown me enough evidence. If you’ll bear with me a moment longer, I’ll try to explain it better.
But then the mugger can just toss a Turing machine into his offer, and now any attempt to analyze the offer is equivalent to solving the halting problem.
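One way to see the obstacle: suppose the mugger's payout is conditioned on whether some embedded machine halts. Any finite analysis can confirm halting by running the machine long enough, but it can never rule halting out. A minimal sketch, where the step-function model and the two toy machines are illustrative assumptions, not anything from the dialogue:

```python
from typing import Callable, Optional

def halts_within(step: Callable[[int], Optional[int]], start: int,
                 budget: int) -> Optional[bool]:
    """Run a machine (given as a step function) until it signals a halt
    by returning None, or until the step budget is exhausted.

    Returns True if it halted, or None for "undecided" -- no finite
    budget ever entitles you to answer False for an arbitrary machine,
    which is exactly the halting-problem obstacle to pricing the offer.
    """
    state = start
    for _ in range(budget):
        state = step(state)
        if state is None:
            return True
    return None  # maybe it loops forever, maybe it is just slow

def countdown(n: int) -> Optional[int]:
    """Toy machine that halts after n steps."""
    return None if n == 0 else n - 1

def loop_forever(n: int) -> Optional[int]:
    """Toy machine that never halts."""
    return n

print(halts_within(countdown, 5, 100))     # halts: True
print(halts_within(loop_forever, 0, 100))  # undecided: None
```

Raising the budget only pushes the "undecided" answer further out; it never converts it into a definite "does not halt."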
I want you to think through exactly what that entails. Now you have two omnipotent god entities, each offering you a more or less random result. All you know about the result is that it is going to be positive or negative. One offers you a random positive result if you kill a baby; the other offers you a random negative result if you kill her. Do you get, on average and over a very large sample size, more utility from killing the baby or from not killing her?
> Now you have two omnipotent god entities, each offering you a more or less random result. All you know about the result is that it’s going to be positive or negative.
One of which is in a temporally advantaged position: he can do anything you can do, and more besides. That is a strictly superior position.
> Do you get, on average and over a very large sample size, more utility from killing the baby or not killing her?
Without auxiliary arguments about what sample space we are drawing from, I don’t see how you could possibly come to any kind of conclusion about this.
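The point about the sample space can be made concrete with a quick Monte Carlo sketch. Everything here is an illustrative assumption (the payoff magnitudes, the trial count, the function name): with symmetric reward and penalty spaces the two offers cancel in expectation, while a heavier-tailed penalty space flips the answer, so no conclusion follows without fixing the distributions.

```python
import random

def expected_gain_from_acting(reward_space, penalty_space,
                              trials=100_000, seed=0):
    """Monte Carlo estimate of the net expected utility of taking the act,
    when one entity pays a random draw from reward_space and the other
    charges an independent random draw from penalty_space (declining
    the act yields 0).  All magnitudes are hypothetical.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += rng.choice(reward_space) - rng.choice(penalty_space)
    return total / trials

# Symmetric sample spaces: the offers cancel, so the average hovers near 0.
symmetric = expected_gain_from_acting([1, 10, 100], [1, 10, 100])

# Skewed assumption: a heavier-tailed penalty space dominates the reward.
skewed = expected_gain_from_acting([1, 10, 100], [1, 10, 10_000])

print(symmetric, skewed)
```

Which regime you are in depends entirely on the auxiliary assumptions about the sample space, which is the objection being raised.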
> One of which is in a temporally advantaged position: he can do anything you can do, and more besides. That is a strictly superior position.
Sorry, explain to me how this hypothetical god-being can consistently exceed my threat, presuming we are both acting from the same privileged outside-of-time perspective?