Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing? If yes, how right would it have to be, for how many babies?
EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.
Would you kill babies if it was intrinsically the right thing to do?
Probably not.
If not, under what other circumstances would you not do the right thing?
Obviously, whenever the force of morality on my volition is overcome by the force of other, non-moral preferences pulling in the opposite direction. (A mere aesthetic preference against baby-killing might suffice; likewise not wanting to go to jail or be executed.)
What about consequentialism? What if we’d get a benevolent AI as a reward?
We should never fight the hypothetical. If we get undesirable results in a hypothetical, that's important information about our decision algorithm. Refusing to face it is like getting a chance to be falsified and not wanting to look. We could just as easily fight Parfit's Hitchhiker or Newcomb's problem. We shouldn't, and neither should we here.
Is there a difference between fighting the hypothetical and recognizing that the hypothetical is badly defined and needs so much unpacking that it’s not worth the effort? This falls into the latter category IMO.
“Negative impact on happiness” is far too broad a concept, “theism” is a huge cluster of ideas, and harm/benefit to different individuals over different timescales has to be part of the decision. Separating these out enough to even know what choice you're facing would likely render the exercise pointless.
My gut feeling is that if this were unpacked into a scenario well-defined enough to really consider, the conundrum would dissolve (or rather, it would become as complicated as the real world without teaching us anything about reality).
Short, speculative, personal answer: there may be individual cases where short-term lies benefit the target as well as the liar, but they are very unlikely to exist on any subject with wide-ranging, long-term decision impact.
If you accept the traditional assumptions of Christianity (well, the ones about “what will happen if I do X,” not about “is X right?”), killing babies is pretty clearly the right thing. And still almost nobody does it, or has any desire to do it.
A just-baptized infant, as far as I know, is pretty much certain to go to Heaven in the end. Whereas if it has time to grow up it has a fair chance of dying in a state of mortal sin and going to Hell. By killing it young you are very likely saving it from approximately infinite suffering, at the price of sending yourself to Hell and making its parents sad. Since you can only go to Hell once, if you kill more than one or two babies then you’re clearly increasing global utility, albeit at great cost to yourself. And yet Christians are not especially likely to kill babies.
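Read as expected-utility arithmetic, the argument is easy to make explicit. Below is a minimal sketch in Python; the probability and the payoff magnitudes are made-up assumptions purely for illustration (the comment above gives no numbers), and the "infinite" payoffs are replaced by large finite stand-ins so the arithmetic stays well-defined.

```python
# Back-of-the-envelope version of the expected-utility argument above.
# All numbers are illustrative assumptions, not claims from the comment.

P_MORTAL_SIN = 0.5        # assumed "fair chance" of an adult dying in mortal sin
U_HEAVEN = 1e9            # stand-in for approximately infinite positive utility
U_HELL = -1e9             # stand-in for approximately infinite suffering

# Expected fate of someone who grows up and takes their chances.
EV_GROWN_UP = P_MORTAL_SIN * U_HELL + (1 - P_MORTAL_SIN) * U_HEAVEN

def utility_change(n_babies: int) -> float:
    """Net change in total utility from killing n just-baptized infants."""
    babies_gain = n_babies * (U_HEAVEN - EV_GROWN_UP)  # each goes to Heaven for sure
    killer_loss = U_HELL - EV_GROWN_UP                 # you go to Hell, but only once
    return babies_gain + killer_loss                   # parents' grief omitted as finite

print(utility_change(1))  # 0.0  -- one baby roughly breaks even
print(utility_change(2))  # 1e9  -- "more than one or two" comes out positive
```

With these (assumed) numbers, the killer's one-time loss is offset after the first baby, which is all the "since you can only go to Hell once" step is doing.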
Yes; none; any amount at all, for any amount at all... assuming no akrasia, and as long as you don't mean 'right thing to do' in some kind of merely conventional sense. But that's just because, without quotation marks, the right thing to do is the formal object of a decision procedure.
If that’s so, then your question is similar to this:
Would you infer that P if P were the conclusion of a sound argument? If not, under what other circumstances would you not infer the conclusion of a sound argument?
I don't see how this relates to the original post; it strikes me as a response to a claim of objective/intrinsic morality rather than to the issue of resolving emotional basilisks vis-à-vis the Litany of Tarski. Are you just saying "it really depends"?
This comment fails to address the post in any way whatsoever. No claim is made of the “right” thing to do; a hypothetical is offered, and the question asked is “what do you do?” It is not even the case that the hypothetical rests on an idea of an intrinsic “right thing” to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It’s not an especially interesting or original question, but it does not make any claims which are relevant to your comment.
EDIT: That does make more sense, although I’d never seen that particular example used as “fighting the hypothetical”, more just that “the right thing” is insufficiently defined for that sort of thing. Downvote revoked, but it’s still not exactly on point to me. I also don’t agree that you need to fight the hypothetical this time, other than to get rid of the particular example.
You've brought in moral realism, which isn't relevant.
“Would you do X if it was epistemically rational, but not instrumentally rational?”
“Would you do Y if it was instrumentally rational, but not epistemically rational?”
If two concepts aren't the same under all possible circumstances, they aren't the same concept. Hypotheticals are an appropriate way of determining that.