Why / which part?
Right, so, I should preface this by saying that I’m not up for fully explaining why I think my position/opinion/reaction is true/correct. But to answer your question, these are the parts I objected to:
The title, “If we can’t lie to others, we will lie to ourselves”
The statement “I’d prefer the first outcome.”
“my loss function is a sum of two terms”
“In fact, this procedure always results in distorted estimates, no matter how large we make the penalty for bad predictions.”
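To spell out what I take that last quoted claim to mean (my own guess, with quadratic penalties standing in for whatever he actually uses): if the estimate $x$ is chosen to minimize

$$w\,(x - t)^2 + (x - s)^2,$$

where $t$ is the truth, $s$ is the socially convenient answer, and $w$ is the weight on the penalty for bad predictions, then the minimizer is

$$x^* = \frac{w\,t + s}{w + 1},$$

which sits strictly between $t$ and $s$ for every finite $w$. The distortion shrinks as $w$ grows, but it never vanishes.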
And that last quote is where I stopped reading, to be honest. It sounds like he’s trying to prove to me that I must lie no matter what, and the content I read is overwhelmingly inadequate for getting me to take that seriously. It feels a bit like someone arguing that I must either stab other people or stab myself, at least to some degree, at least paper cuts, and I just don’t need to take that kind of argument seriously. It feels similar to Pascal’s mugging, in that I’m confident I don’t need to figure out exactly why it’s wrong in order to know that it’s okay to ignore it.
Of course, as a rationalist, it is deeply important to me that I accept it if it is true. And I am at least moderately curious about his argument. Just not curious enough to keep reading.
I don’t think his math is wrong, but I think his model is a wrong representation of the situation. If I had to guess, without thinking too hard, at what his actual modelling error is, it would be that my “loss function” is not a sum of two terms but a case structure: just say the true thing, up until some level of other costs (social costs, say), at which point just stop saying things, up until some further, very high cost (like my life being on the line), at which point it’s okay to lie.
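Concretely, something like this, where $c$ is the non-accuracy cost of being honest and $c_{\text{social}}$ and $c_{\text{extreme}}$ are thresholds I’m making up for illustration:

$$\text{response}(c) = \begin{cases} \text{say the true thing} & \text{if } c < c_{\text{social}} \\ \text{say nothing} & \text{if } c_{\text{social}} \le c < c_{\text{extreme}} \\ \text{lie} & \text{if } c \ge c_{\text{extreme}} \end{cases}$$

rather than a single weighted sum that trades accuracy off continuously against everything else.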
The first few sections are best read as empirical claims about what’s evolutionarily useful for humans (though I agree that the language is sloppy and doesn’t make this clear). Later sections distinguish what we consciously want from what our brains have been optimised to achieve, and venture some suggestions for what we should do given the conflict. (And it does include a suggestion that it might be ok to give over-optimistic ETAs, but it doesn’t really argue for it, and it’s not an important point.)
Your suggested alternate loss-function seems like a plausible description of your conscious desires, which may well be different from what evolution optimised us for.