I don’t think that it’s rewarding hypocrisy. I think that this moral stance is about punishing deliberate liars. If you are more self-aware, then yes, you would have more opportunities to deliberately lie (about, for example, what attracts you to a given partner); but this is not, to my mind, a reason to avoid self-awareness. Similarly, if you understand more science, then this gives you the option of creating better, more believable lies; but this is not a reason to reject the study of science.
I don’t think that it’s rewarding hypocrisy. I think that this moral stance is about punishing deliberate liars. If you are more self-aware, then yes, you would have more opportunities to deliberately lie (about, for example, what attracts you to a given partner);
Self-awareness successfully penalized.
but this is not, to my mind, a reason to avoid self-awareness.
I often ignore punishment too. After all, punishment of me isn’t something I want to reward! But when I refuse to respond to an incentive I do not try to claim that the incentive does not exist.
Similarly, if you understand more science, then this gives you the option of creating better, more believable lies; but this is not a reason to reject the study of science.
Not the same. (Unless you want to also declare that the social environment is one in which people often say that the world is flat and that people declining to say the world is flat are at a disadvantage.)
Oh, so the idea is that if you’re really clueless, then you’re not lying to your partner? You just feel compelled to bring them to fancy dinner parties but never spend one-on-one time with them and you don’t know why? Sorry, I’m still having trouble understanding. =/
I wasn’t quite going for a moral stance that punishes deliberate liars, but I was going for one where the people cooperate to maximize … combined utility, I guess? If both people are happy, it’s better than an arrangement where one person is making the other unhappy. Which sort of requires honesty, because if you don’t tell the other person your real utility function, they won’t be able to help with it. And if you act according to a different utility function than you tell them about, that will reflect in your actions and they’ll be able to tell something’s up.
maximize … combined utility, I guess? If both people are happy, it’s better than an arrangement where one person is making the other unhappy.
I agree that both being happy is better than one making the other unhappy, but it’s important to note that:

- The two are not mutually exclusive: I could reduce your happiness, but be unable to overcome your naturally sunny disposition.
- One happy and one unhappy might have a higher “combined utility” than both happy, if one is a sadistic utility monster, as the sketch below illustrates.
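To make the utility-monster worry concrete, here is a minimal sketch (in Python, with numbers invented purely for illustration) of how a plain sum of utilities can rank “one delighted sadist, one unhappy partner” above “both moderately happy”:

```python
# Toy illustration of the utility-monster objection to naive summing.
# All utility numbers are made up: positive = happy, negative = unhappy.

def combined_utility(utilities):
    """Naive aggregation: simply add everyone's utility together."""
    return sum(utilities)

both_happy = [3, 3]   # two moderately happy partners
monster = [10, -2]    # a delighted sadist and an unhappy partner

print(combined_utility(both_happy))  # 6
print(combined_utility(monster))     # 8 -- the plain sum prefers this
```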
if you don’t tell the other person your real utility function, they won’t be able to help with it.
No-one’s ever told me their utility function, but I still think I’ve helped them. When I hold a door open for someone, I help them, but they didn’t tell me any coefficients.
if you act according to a different utility function than you tell them about, that will reflect in your actions and they’ll be able to tell something’s up.
If people could always tell something was up, there’d be no unknowing trophy spouses and hence no problem.
That just works because most people appreciate you opening doors. If you met someone who hated having their door opened, you’d stop, right? And you wouldn’t really know they hated it unless they told you honestly! Or maybe you’d be able to tell because they cringe and grimace every time you do it, which is what I mean by actions reflecting happiness. Maybe they wouldn’t even know why they cringe and grimace, but you could experiment and tell it was door-related.
Yes, there are scenarios where you need to ask in order to help people. But there are also scenarios where you don’t, and in the comment I was replying to, you suggested that one had to ask in order to help.
Yeah, I think I meant that communication happens somehow, either explicitly or through cringing-like behavior. But you’re right, I didn’t combine utility properly in my earlier comment. I wanted a way to penalize unhappiness more. Like if something makes me reeeally happy and the other person a bit unhappy, it should be up to the other person to decide if I get to do it. In the sense that unhappiness is unpleasantness and not quite the same as absence of happiness. Arr, complicated.
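One way to formalize “penalize unhappiness more”, sketched under my own assumptions rather than anything stated above: weight negative utilities more heavily than positive ones before summing, so a little suffering can outweigh a lot of enjoyment. The weight of 5 below is an arbitrary choice for illustration, not a claim about the right exchange rate.

```python
# Sketch of an aggregation that treats unhappiness as worse than the mere
# absence of happiness. UNHAPPINESS_WEIGHT is a hypothetical parameter.

UNHAPPINESS_WEIGHT = 5

def weighted_welfare(utilities, weight=UNHAPPINESS_WEIGHT):
    """Sum utilities, scaling up any negative ones by `weight`."""
    return sum(u if u >= 0 else u * weight for u in utilities)

print(weighted_welfare([3, 3]))    # 6: both partners happy
print(weighted_welfare([10, -2]))  # 0: the sadist's gain no longer wins
```

In the limit of a very large weight this behaves like a veto: any arrangement that makes someone unhappy loses to one that makes no one unhappy, which matches the “other person decides” rule above.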
(Unless you want to also declare that the social environment is one in which people often say that the world is flat and that people declining to say the world is flat are at a disadvantage.)
Not sure how much of this is true, but I hear rumours about that being the case (if you replace “the world is flat” with something like “humans were intelligently designed”) in certain geographical locales. (I do hope that Conservapedia is eventually revealed to be just a parody, though.)
EDIT: To avoid appearing to be one-sided, I’ll point out that the equivalent of that at the other end of the political spectrum is something like “all races have the same average intelligence”.
I don’t think that it’s rewarding hypocrisy. I think that this moral stance is about punishing deliberate liars. If you are more self-aware, then yes, you would have more opportunities to deliberately lie (about, for example, what attracts you to a given partner);
Self-awareness successfully penalized.
I disagree. Greater self-awareness creates greater opportunity to lie, but it does not compel the person to lie. Where is the penalty in that?
Similarly, if you understand more science, then this gives you the option of creating better, more believable lies; but this is not a reason to reject the study of science.
Not the same. (Unless you want to also declare that the social environment is one in which people often say that the world is flat and that people declining to say the world is flat are at a disadvantage.)
I don’t understand your point. Given what little I do understand, I suspect that it may be related to my generally poor awareness of social environments. What social environment are you assuming?
I disagree. Greater self-awareness creates greater opportunity to lie, but it does not compel the person to lie. Where is the penalty in that?
The person who sincerely but falsely believes p is not lying when they assert p. Someone who has greater awareness and knows that p is false has one fewer option to choose from morally: he can’t say p, because that would be lying. This becomes a problem especially when there is a social cost attached to not saying p, in which case the person with greater knowledge is effectively penalized: they have to either do something immoral (lying) or incur the costs of failing to say p (or, God forbid, saying ~p!).
The person who sincerely but falsely believes p is not lying when they assert p. Someone who has greater awareness and knows that p is false has one fewer option to choose from morally: he can’t say p, because that would be lying.
True, and true.
This becomes a problem especially when there is a social cost attached to not saying p, in which case the person with greater knowledge is effectively penalized: they have to either do something immoral (lying) or incur the costs of failing to say p (or, God forbid, saying ~p!).
Yes; but why should there be a penalty for not saying p? Surely it is just as likely, on average, that there will be a penalty for not saying ~p (in which case greater self-awareness rewards rather than penalizes).
Yes; but why should there be a penalty for not saying p? Surely it is just as likely, on average, that there will be a penalty for not saying ~p (in which case greater self-awareness rewards rather than penalizes).
In the case that was the starting point of this discussion, there surely is a penalty for saying ~p, but quite possibly also one for failing to say p: your partner might complain if they never hear nice things about themselves from you (or at least not nice things of the kind they want to hear).
On average, I would expect it to be more likely that there is a social penalty for failing to say something that only a very self-aware person would not believe (i.e. for failing to go along with the “official narrative”) than that there is a social penalty for not saying something that only a very self-aware person would know (thereby forcing non-self-aware people to lie).
In the case that was the starting point of this discussion, there surely is a penalty for saying ~p, but quite possibly also one for failing to say p: your partner might complain if they never hear nice things about themselves from you (or at least not nice things of the kind they want to hear).
Ah; so you’re not arguing that there’s a moral penalty for self-awareness in all situations, you’re saying that there’s a moral penalty for self-awareness in a specific situation! (Apologies; I was trying to consider the rule as applied in general).
Thank you, that helped to clear things up.
And, just to make sure that there are no more assumption traps (i.e. where we each assume that something mutually exclusive is obvious), I will describe my understanding of that situation (correct me if I’m wrong):
A person finds a romantic partner to whom they are attracted. He (or she) compliments said partner on some aspect which he (or she) finds attractive only due to the halo effect; on the basis of these compliments, both partners enter a long-term romantic relationship. The person later improves their self-awareness, and realises that the earlier compliments were only due to the halo effect; admitting so then carries a social penalty.
In that case, I would agree; however, improved self-knowledge earlier in the process can head off the problem entirely. So it’s not penalizing self-knowledge; it’s rather penalizing the earlier lack of self-knowledge.
Ah; so you’re not arguing that there’s a moral penalty for self-awareness in all situations, you’re saying that there’s a moral penalty for self-awareness in a specific situation!
Yes, exactly. Sorry, I didn’t quite catch that you thought we were talking about a general rule or I would have cleared this up earlier.
In that case, I would agree; however, improved self-knowledge earlier in the process can head off the problem entirely. So it’s not penalizing self-knowledge; it’s rather penalizing the earlier lack of self-knowledge.
I don’t think so. It’s penalizing becoming self-aware. After all, if the person never became self-aware, she would never incur the penalty.
I don’t think so. It’s penalizing becoming self-aware. After all, if the person never became self-aware, she would never incur the penalty.
And, similarly, if a person never hits the ground, he never incurs any injury from falling. If he becomes self-aware quickly enough, then he takes comparatively minor social damage, just as a man who falls and quickly hits the ground may only twist an ankle. If he becomes self-aware only after twenty years of marriage, then he potentially takes severe social damage, just as a man who falls from a skyscraper takes severe damage when he hits the ground.
So, yes, I can see why you make that statement; it is a reasonable statement, but I think it places the emphasis on the wrong part of the fall.