Recite the Litany of Tarski a few times, if that helps: if you have a trait, you desire to believe that you have the trait. If you do not have a trait, you desire to believe that you do not have the trait.
On the other hand, the reason self-deception evolved was so that you could effectively signal and lie to others about your abilities. It might be a good idea to read up on only the positive traits that you might have.
In fact, if there ever was a place where you want to deliberately indulge in epistemic irrationality, this is probably it.
Do you have something to protect, by any chance?
Presumably I value social status enough that I’m not prepared to trade it off for a small increase in epistemic rationality.
I’m curious as to the unexplained downvotes. Do people simply not like the idea that it might be instrumentally rational to be epistemically irrational in certain cases?
Note that we seem to have a case where this particular bias evolved because it was selected for, specifically because it increased one’s ability to attract mates and allies. Of course, there might be benefits to debiasing, but I don’t see a compelling case that they outweigh the costs.
So, if you want to be more likely to be single and lonely, go ahead and debias…
You could say of any systematic bias in humans that it evolved, specifically because it increased one’s ability to survive, reproduce, or both. If this is your true rejection, why do you not run screaming from this site, “They want me to de-bias in ways such that, had the biases not been productive in the ancestral environment, I’d already be de-biased!”
Good point.
I think this is a case for Ord and Bostrom’s “Wisdom of Nature” heuristic.
Many cognitive biases arise from an approximation—some cheap and dirty trick—that held true enough in our EEA but doesn’t now. For example: probability neglect, the representativeness heuristic, short time horizons, etc. These you want to debias.
Others arise from selective pressures that are very much alive and kicking. In our modern society, human social interaction seems to have changed back to being more like the stone age, except with much less murder. It seems to me that people very much play the same signalling games they used to play, and having positive self-beliefs seems like a good way to win at them.
The Litany of Tarski is indeed a powerful principle, but this is exactly the kind of misuse of it that will cheapen it.
Bostrom and Sandberg (in your linked paper) suggest three reasons why we might want to change the design that evolution gave us:
1. Changed tradeoffs. We no longer live in the ancestral environment.
2. Value discordance. Evolution’s goal may not match our own.
3. Evolutionary restrictions. We might have tools that were not available to evolution.
On #2, I’ll note that evolution designed humans as temporary vessels for the goal of propagating genes. Not, for example, for the goal of making you happy. You may prefer to hijack evolution’s design in service of your own goals, rather than in service of your genes’ reproduction.
Lots of evolution’s adaptations (including many of the biases we discuss) are good for the propagation of the genes, at the cost of being bad for the individual human who suffers the bias. A self-aware human may wish to choose to reverse that tradeoff.
Surely having accurate positive self-beliefs is a win over having inaccurate positive self-beliefs, even if having inaccurate positive self-beliefs is a loss compared to having accurate negative self-beliefs. I don’t suggest that you should become luminous enough to say, “Wow, I suck in the following ways!” and then quit.
Sorry, I don’t get why? Why not just set all of your self-beliefs to “strongly positive”, to the extent that you can get away with it?
The criterion of instrumental optimality regarding personality self-beliefs is in conflict with the epistemic one. Why not just go the whole hog and believe you’re very kind, very generous, very witty, very honorable, very trustworthy, etc…
It’s possible, although it seems unlikely on priors, that I’m relatively unusual in preferring that I actually be nice/smart/reasonable/friendly/etc. over preferring that I think that I’m those things. This seems to me much like preferring that my family be actually alive and well, over my merely thinking that they are alive and well.
From a purely practical standpoint, people might notice if you actually have negative personal traits, even if your positive self-image helps you signal their absence relatively well. They will then think you are an arrogant, deluded person (who also has whatever negative traits you are trying to signal away).
I think that you have a fundamentally flawed model of most other humans. You are modeling them as reasoning engines that reason logically from explicitly stated ethical principles.
I prefer to model people as adaptation executors who respond to subcommunications and signals in a way that was optimized by evolution, and then, if asked, confabulate verbal rationalizations for their behavior.
Arrogance and a pervasive positive self-image are strong signals of high status. People will respond positively to them. It is possible to push arrogance too far, especially if it is too negative, resentful, and backed by an attitude of hating other people. This is because it signals lower status—high-status people generally like others. But a good deal of self-assurance, unshakable self-confidence, etc. is good.
Have you ever met one of those people who tell bad jokes all the time? This seems a quintessential example of someone with a strong false positive self-image.
Confucius says: man who tell bad jokes is never laughed at.
What predictions does this model let you make? When have you seen it compellingly confirmed in situations where other models would have had you predict something else? It sounds dangerously vulnerable to epicyclic adaptation to individual cases that don’t align with it.
The ‘fake it until you make it’ school of self-improvement is based around this kind of model. For example, if you want to be a self-confident person and derive the benefits of self-confidence, start out ‘faking’ self-confidence and mimicking the behaviours and signals of self-confident people. Other people will generally respond to this as they would respond to someone who is ‘actually’ self-confident, and a virtuous circle will result in you eventually not having to fake the self-confidence any more.
A prediction of this kind of model might therefore be that the best way to improve self-confidence is to consciously mimic the behaviours of self-confident individuals rather than to try to ‘internally’ improve your self-confidence. Anecdotally, I see some evidence that this works, but I also see some evidence that evolution has made people better at detecting fakers than a naive version of the model might suppose.
If you understand the subconscious mechanisms and how they were tuned to the old environment, and how the old differs from the new, you will eventually see better hacks.
I’m not going to talk about many of those here because I tried before and it went badly.
The charge of epicyclic adaptation applies just as well to the other model: the one where you model people as reasoning engines that reason logically from explicitly stated ethical principles. Here, you can just keep varying which of the many principles they are supposed to be following (as human commonsense morality contains so many different and mutually incompatible principles, so many circumstances, weaknesses of will, etc.).
There are some solid experiments, e.g. moral dumbfounding, that back this up. Also, as soon as you expose people to a correct contrarian idea, you’ll see them attack it with a torrent of confabulated excuses.
I am quite fond of this model of people: I think it should be used more. Though agreed that we should test it, criticize it, etc.
It would probably be better for our civilization, IMHO, if individuals were much less arrogant and much less self-confident. Existential risks, for example, would probably be lower if the scientists and technologists in certain fields were less confident of the moral goodness of their actions and their skill at avoiding terrible mistakes. And risks would be reduced if their opinion of their own status (which of course is highly correlated with their actual status) were lower, since lower-status people spend more time doubting the goodness or rightness of their effects on the world and, IMHO, are less prone to rationalization. It is hard to change the current over-confident equilibrium, however, because low-confidence individuals are at a competitive disadvantage at obtaining the resources (e.g., education, jobs, connections) needed to gain influence in our civilization.
[Two sentences that go way off on a tangent deleted because, now that the parent comment has been deleted, they make no sense.]
A person who has a more realistic self-image than average might appear less nice than an average person who is equally nice. Thus, the choice to improve your epistemic rationality also causes you to implicitly lie to the people you interact with, presenting yourself as a less nice person than you actually are.
I understand your first sentence, and agree ceteris paribus (but I think the person with the realistic beliefs is in a better position to become actually nicer). Your second makes no sense to me. How is it implicitly lying to have accurate beliefs about how nice you are? The other way around seems more plausible.
The improved accuracy is a property of your own beliefs about yourself, not of other people’s beliefs about you. By increasing the accuracy of your beliefs about yourself, you simultaneously decrease the accuracy of other people’s beliefs about you (unless you compensate with additional signalling by other means, which may be impossible in a number of cases). Consciously compromising the accuracy of other people’s beliefs is usually called lying, or at least “not technically lying.”
I think that may be the most roundabout and head-spinny justification for self-deception I’ve ever heard. Wow. By a similar token, should I not take up gardening if it’s not within my power to update everyone who has the belief that I don’t garden?
Note that I don’t endorse self-deception; see my other comment in this thread. But the argument points to a negative trait of the choice. (The argument is related to the stance that, as a rationalist, you’d want to use rhetoric as much as is common, but not more, to avoid incorrectly signaling that your position is weak.)
Normally, if you take up gardening, other people’s level of belief will either be unchanged (prior state of knowledge: they don’t have new evidence), or will move up (towards the truth) upon receiving new evidence. Here, the situation is reversed: new evidence (not new action—this is a point where your analogy breaks) will move people’s belief away from the truth.
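To make the asymmetry concrete, here is a minimal numeric sketch in Python (the numbers, function names, and the single “typical inflation” parameter are all hypothetical, chosen only to illustrate the argument above, not taken from any study): if observers discount everyone’s signals by the typical amount of self-inflation, the person whose self-image is accurate is the one who ends up underestimated, i.e. others’ beliefs about them move away from the truth.

```python
# Minimal illustrative sketch (all values hypothetical) of the asymmetry argued above:
# observers calibrate their discounting on the typical, self-inflated signaler,
# so a person with an accurate self-image ends up being underestimated.

TRUE_NICENESS = 0.7            # how nice you actually are, on a 0..1 scale
TYPICAL_SELF_INFLATION = 0.15  # how much the average person overrates themselves

def observed_signal(true_trait: float, self_inflation: float) -> float:
    """People signal roughly what they believe about themselves."""
    return min(1.0, true_trait + self_inflation)

def observer_estimate(signal: float) -> float:
    """Observers discount signals by the typical inflation, not by yours."""
    return max(0.0, signal - TYPICAL_SELF_INFLATION)

# The average, self-inflated person is read back roughly correctly:
average_person = observer_estimate(observed_signal(TRUE_NICENESS, TYPICAL_SELF_INFLATION))

# The accurately self-assessing person is read as less nice than they actually are:
accurate_person = observer_estimate(observed_signal(TRUE_NICENESS, 0.0))

print(f"average person read as:  {average_person:.2f}")   # ~0.70, matches the truth
print(f"accurate person read as: {accurate_person:.2f}")  # ~0.55, further from the truth
```

The real mechanism is of course messier than a single subtraction, but the sketch captures the direction of the effect: once observers’ discounting is calibrated to typical signalers, reducing your own inflation moves their estimate of you away from the truth rather than toward it.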
I think the idea is to have both accurate and inaccurate positive self-beliefs, and no negative self-beliefs, accurate or otherwise.
On whether this is desirable or even possible, I take no stance.
Not acting on reasons to be epistemically irrational seems like a good injunction. It shouldn’t, however, prevent people from considering whether a given way of being epistemically irrational is instrumentally rational. Injunctions themselves are a way of guarding instrumental rationality from misguided acts of epistemic rationality. In this case, the principle is applied in reverse.