Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.
I remember Eliezer saying something similar, though I can’t find it right now (the closest I could find was this). It was something about the benefits of being the kind of person who doesn’t lie, even if the fate of the world is at stake. Because if you aren’t, the minute the fate of the world is at stake is the minute your word becomes worthless.
I recall it too. I think the key distinction is that if the choice were literally between lying and everyone in the world—including yourself—perishing, Kant would let us all die. Eliezer would not. What I took Eliezer to be saying (working from memory; I may try to find the post later) is that if you think the choice is between lying and the sun exploding (or something analogous) in any real-life situation… you’re wrong. Given what we know about humans, it’s far more likely that you’re rationalizing a compromise of your values than that compromising them is actually necessary. So a consequentialist system implies basically deontological rules once human nature is taken into account.
Once again, this is all from my memory, so I could be wrong.
Although Eliezer didn’t put it precisely in these terms, he was sort of suggesting that if one could self-modify in such a way that it became impossible to break a certain sort of absolutely binding promise, it would be good to modify oneself in that way, even though it would mean that if the situation actually came up where you had to break the promise or let the world perish, you would have to let the world perish.
I think the article you (and the parent comment) are talking about is this one.