What is that “purity” you’re talking about? I didn’t realize humans could achieve epistemic perfection.
Keep in mind here that I’m steelmanning someone else’s argument, perhaps improperly. I don’t want to put words in anyone else’s mouth. That said, I used the term ‘purity’ in loose analogy to a ‘pure’ programming language, wherein one exception is sufficient to remove much of the possible gains.
Continuing the steelmanning, however, I’d say that while no human can achieve epistemic perfection, there’s a large class of epistemic failures that you only recognize if you’re striving for perfection. Striving for purity, not purity itself, is what gets you the gains.
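To make the ‘pure language’ analogy concrete, here is a minimal Haskell sketch (Haskell being the usual example of a pure language; the function name is mine and purely illustrative). A single escape hatch like unsafePerformIO doesn’t just make one function impure; it forfeits the compiler’s purity guarantees for every expression that touches it:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- The type signature claims purity, but the body smuggles in a side
-- effect, so referential transparency is gone for any caller.
leakyDouble :: Int -> Int
leakyDouble x = unsafePerformIO $ do
  putStrLn ("evaluating leakyDouble " ++ show x)  -- hidden side effect
  pure (x * 2)

main :: IO ()
main =
  -- Whether the message prints once or twice here depends on compiler
  -- sharing and optimization flags: equational reasoning about
  -- pure-looking code no longer holds.
  print (leakyDouble 3 + leakyDouble 3)
```

That is the sense in which one exception removes much of the gains: you lose not just the single case, but the ability to rely on the guarantee anywhere downstream.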
So8res, you’re completely accurate in your interpretation of my argument. I’m going to read some more of your previous posts before responding much to your first comment here.
Yes, as Eliezer put it somewhat dramatically here:

If you once tell a lie, the truth is ever after your enemy.

To expand on this in context: as long as you are striving for the truth, any evidence you come across helps you; but once you choose to believe a lie, you must forever avoid disconfirming evidence.
You’ve drawn an important distinction between believing a lie and telling one. Your formulation is correct, but Eliezer’s is wrong.
Telling a lie has its own problems, as I discuss here.
Yes, it’s pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that’s the point of lying, after all. But right now we’re talking about the harm one does to oneself by lying; I submit that there needn’t be any.
One distinction that may or may not matter, but that many discussions fail to mention at all, is the distinction between telling a lie and maintaining it (keeping the secret). Many of the epistemic arguments seem to disappear if you’ve previously made it clear that you might lie to someone, you intend to tell the truth a few weeks down the line, and, if pressed or questioned, you confess and tell the actual truth rather than trying to cover it with further lies.
Edit: also, have some kind of oath or special circumstance under which you will in fact never lie, but precommit to using it only for important things, or give it a cost in some way, so you won’t be pressed to give it for everything.
Did you even read the comment I linked to? Its whole point was about the harm you do to yourself and your cause by lying.
I think you and fezziwig aren’t disagreeing. You’re saying as an empirical matter that lying can (and maybe often does) harm the liar. He’s just saying that it doesn’t necessarily harm the liar, and indeed it may well be that lies are often a net benefit. These are compatible claims.
You’ve drawn an important distinction between believing a lie and telling one. Right now we’re talking about lying to ourselves, so the difference isn’t very great, but be very careful with that quote in general.
I can already predict, though, that much of my response will include material from here and here.
Could you give some examples?
I am not sure which class you’re talking about… again, can you provide some examples?