But every choice has a nonzero probability of leading to torture. Your proposed moral stance amounts to “minimize the probability-times-intensity of torture”, to which a reasonable answer might be, “set off a nuclear holocaust annihilating all life on the planet”.
(And the distinction between torture and non-torture is—at least in the abstract—fuzzy. How much pain does it have to be to be torture?)
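To make the worry concrete, here is a minimal sketch of that objective; every probability and intensity in it is invented purely for illustration:

```python
# A minimal sketch of the objective "minimize P(torture) * intensity".
# All numbers below are invented for illustration.
options = {
    "get a glass of water": {"p_torture": 1e-9, "intensity": 1.0},
    "get a cup of coffee":  {"p_torture": 2e-9, "intensity": 1.0},
    "nuclear holocaust":    {"p_torture": 0.0,  "intensity": 1.0},  # no life, no torture
}

best = min(options, key=lambda k: options[k]["p_torture"] * options[k]["intensity"])
print(best)  # -> "nuclear holocaust": the objective endorses annihilation
```

Any option that drives the probability of torture all the way to zero wins under this objective, which is exactly the pathology being pointed out.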
In real life or in this example? I don’t believe this is true in real life.
There is nothing you can do that makes it impossible that there will be torture. Therefore, every choice has a nonzero probability of being followed by torture. I’m not sure whether “leading to torture” is the best way to phrase this, though.
What he said. Also, if you are evaluating the rectitude of each possible choice by its consequences (i.e. using your utility function), it doesn’t matter if you actually (might) cause the torture or if it just (possibly) occurs within your light cone—you have to count it.
Are you referring to me? I’m a she.
*headdesk*
What Alicorn said, yes. Dammit, I thought I was doing pretty well at avoiding the pronoun problems...
Don’t worry about it. It was a safe bet, if you don’t know who I am and this is the context you have to work with ;)
Hey, don’t tell me what I’m not allowed to worry about! :P
(...geez, I feel like I’m about to be deleted as natter...)
I believe you should count choices that can measurably change the probability of torture. If you can’t measure a change in the probability of torture, you should count that as no change. I believe this view more closely corresponds to current physical models than the infinite butterflies concept.
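A minimal sketch of that counting rule, assuming a purely hypothetical measurement resolution:

```python
# Sketch of the proposed counting rule. The resolution below which
# probability changes "don't count" is an assumed value, for illustration only.
MEASUREMENT_RESOLUTION = 1e-12

def counted_change(delta_p: float) -> float:
    """Treat immeasurably small changes in P(torture) as no change at all."""
    return delta_p if abs(delta_p) >= MEASUREMENT_RESOLUTION else 0.0

print(counted_change(1e-15))  # 0.0   -- below the threshold, counts as no change
print(counted_change(1e-6))   # 1e-06 -- measurable, so it counts
```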
But if torture has infinite weight, then any change in its probability—even one too small to measure—has either infinite utility or infinite disutility. Which makes the situation even worse.
Anyway, I’m not arguing that you should measure it this way, I’m arguing that you don’t. Mathematically, the implications of your proposal do not correspond to the value judgements you endorse, and therefore the proposal doesn’t correspond to your actual algorithm, and should be abandoned.
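The breakdown is easy to exhibit, using IEEE floats as a stand-in for the extended reals (probabilities invented for illustration):

```python
# IEEE floats as a stand-in for extended reals; probabilities invented.
U_TORTURE = float("-inf")  # the weighting under discussion

eu_a = 1e-30 * U_TORTURE + 100.0  # tiny torture risk, large benefit -> -inf
eu_b = 1e-300 * U_TORTURE + 0.0   # far tinier risk, no benefit      -> -inf

print(eu_a, eu_b)   # -inf -inf: every option is infinitely bad
print(eu_a < eu_b)  # False: the function can no longer rank the options
print(eu_a - eu_b)  # nan:   even the difference between them is undefined
```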
Changes that are small enough to be beyond Heisenberg’s epistemological barrier cannot in principle be shown to exist. So, they acquire Easter Bunny-like status.
Changes that are within this barrier but beyond my measurement capabilities aren’t known to me; and, utility is an epistemological function. I can’t measure it, so I can’t know about it, so it doesn’t enter into my utility.
I think a bigger problem is the question of enduring a split second of torture in exchange for a huge social good. This sort of thing is ruled out by that utility function.
But that’s ridiculous. I would gladly exchange being tortured for a few seconds—say, waterboarding, like Christopher Hitchens suffered—for, say, an end to starvation worldwide!
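Concretely (the payoff number is invented, and any finite number behaves the same): with torture weighted at negative infinity, no finite good can ever tip the scales.

```python
U_TORTURE = float("-inf")  # a split second of torture, weighted infinitely
U_HUGE_SOCIAL_GOOD = 1e15  # invented stand-in for "an end to starvation worldwide"

eu_accept = U_TORTURE + U_HUGE_SOCIAL_GOOD  # -inf
eu_refuse = 0.0

print(eu_accept > eu_refuse)  # False: the trade is rejected at any finite payoff
```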
More to the point, deleting infinities from your equations works sometimes—I’ve heard of it being done in quantum mechanics, under the name renormalization—but doing so with the noisy filter of your personal ignorance, or even the less-noisy filter of theoretical detectability, leaves wide open the possibility of inconsistencies in your system. It’s just not what a consistent moral framework looks like.
I agree about the torture for a few seconds.
A utility function is just a way of describing the ranking of desirability of scenarios. I’m not convinced that singularities on the left can’t be a part of that description.
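A standard illustration of such a description, assuming log utility over some positive quantity x:

```latex
% Log utility: singular on the left (diverges as x -> 0+), yet it still
% induces a complete ranking of all outcomes with x > 0.
U(x) = \ln x, \qquad \lim_{x \to 0^+} U(x) = -\infty
```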
Singularities on the left I can’t rule out universally, but setting the utility of torture to negative infinity … well, I’ve told you my reasons for objecting. If you want me to spend more time elaborating, let me know; for my own part, I’m done.
There is no “Heisenberg’s epistemological barrier”. A utility function is defined over everything that could possibly be, whether or not you know specific possibilities to be real. You are supposed to average over the set of possibilities that you can’t distinguish because of limited knowledge.
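In symbols, a minimal statement of that prescription (notation assumed, not the commenter’s):

```latex
% Expected utility of action a: average U over the states s that your
% limited knowledge cannot rule out, weighted by their probabilities.
\mathbb{E}[U \mid a] = \sum_{s} P(s \mid a)\, U(s)
```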
The equation involving Planck’s constant in the following link is not in dispute, and that equation does constitute an epistemological barrier:
http://en.wikipedia.org/wiki/Uncertainty_principle
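For reference, the relation behind that link is the position-momentum uncertainty principle:

```latex
% Position-momentum uncertainty relation, where hbar = h / (2*pi)
% is the reduced Planck constant:
\sigma_x \, \sigma_p \ge \frac{\hbar}{2}
```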
Everyone has their own utility function (whether they’re honest about it or not), I suppose. Personally, I would never try to place myself in the shoes of Laplace’s Demon. They’re probably those felt pointy jester shoes with the bells on the end.
See Absolute certainty.
Proof left to the reader?
If I am to choose between getting a glass of water or a cup of coffee, I am quite confident that neither choice will lead to torture. You certainly cannot prove that either choice will lead to torture. Absolute certainty has nothing to do with it, in my opinion.
You either have absolute certainty in the statement that neither choice will lead to torture, or you allow some probability of it being incorrect.