It’s possible, although it seems unlikely on priors, that I’m relatively unusual in preferring that I actually be nice/smart/reasonable/friendly/etc. over merely thinking that I’m those things. This seems to me much like preferring that my family actually be alive and well over merely thinking that they are alive and well.
From a purely practical standpoint, people might notice if you actually have negative personal traits, even if your positive self-image helps you signal otherwise relatively well. They will then think you are an arrogant, deluded person (who also has whatever negative traits you are trying to signal away).
They will then think you are an arrogant, deluded person
I think that you have a fundamentally flawed model of most other humans. You are modeling them as reasoning engines that reason logically from explicitly stated ethical principles.
I prefer to model people as adaptation executors who respond to subcommunications and signals in a way that was optimized by evolution, and then, if asked, confabulate verbal rationalizations for their behavior.
Arrogance and a pervasive positive self-image are strong signals of high status. People will respond positively to them. It is possible to push arrogance too far, especially if it is too negative, resentful, and backed by an attitude of hating other people. This is because it signals lower status: high-status people generally like others. But a good deal of self-assurance, unshakable self-confidence, etc., is good.
They will then think you are an arrogant, deluded person
I think that you have a fundamentally flawed model of most other humans. You are modeling them as reasoning engines that reason logically from explicitly stated ethical principles.
Have you ever met one of those people who tells bad jokes all the time? This seems a quintessential example of someone with a strong false positive self-image.
Confucius says: man who tell bad jokes is never laughed at.
I prefer to model people as adaptation executors who respond to subcommunications and signals in a way that was optimized by evolution, and then, if asked, confabulate verbal rationalizations for their behavior.
What predictions does this model let you make? When have you seen it compellingly confirmed in situations where other models would have had you predict something else? It sounds dangerously vulnerable to epicyclic adaptation to individual cases that don’t align with it.
The ‘fake it until you make it’ school of self-improvement is based around this kind of model. For example, if you want to be a self-confident person and derive the benefits of self-confidence, start out ‘faking’ self-confidence and mimicking the behaviours and signals of self-confident people. Other people will generally respond to this as they would respond to someone who is ‘actually’ self-confident, and a virtuous circle will result in you eventually not having to fake the self-confidence any more.
A prediction of this kind of model might therefore be that the best way to improve self-confidence is to consciously mimic the behaviours of self-confident individuals rather than to try to ‘internally’ improve your self-confidence. Anecdotally, I see some evidence that this works, but I also see some evidence that evolution has made people better at detecting fakers than a naive version of the model might suppose.
If you understand the subconscious mechanisms and how they were tuned to the old environment, and how the old differs from the new, you will eventually see better hacks.
I’m not going to talk about many of those here because I tried before and it went badly.
It sounds dangerously vulnerable to epicyclic adaptation to individual cases that don’t align with it.
As is the other model: the one where you model them as reasoning engines that reason logically from explicitly stated ethical principles. Here, you can just keep varying which of the many principles they are supposed to be following (as human commonsense morality contains so many different and mutually incompatible principles, allowances for circumstances, weaknesses of will, etc.).
There are some solid experiments, e.g. moral dumbfounding, that back this up. Also, as soon as you expose people to a correct contrarian idea, you’ll see people attack it with a torrent of confabulated excuses.
I am quite fond of this model of people: I think it should be used more. Though agreed that we should test it, criticize it, etc.
Why not just set all of your self-beliefs to “strongly positive”, to the extent that you can get away with it? . . . Why not just go the whole hog and believe you’re very kind, very generous, very witty, very honorable, very trustworthy, etc...
Arrogance and a pervasive positive self-image are strong signals of high status. People will respond positively to them.
It would probably be better for our civilization IMHO if individuals were much less arrogant and much less self-confident. Existential risks, for example, would probably be lower IMHO if the scientists and technologists in certain fields were less confident of the moral goodness of their actions and of their skill at avoiding terrible mistakes. And risks would be reduced if their opinion of their own status (which of course is highly correlated with their actual status) were lower, since lower-status people spend more time doubting the goodness or rightness of their effects on the world and IMHO are less prone to rationalization. It is hard to change the current over-confident equilibrium, however, because low-confidence individuals are at a competitive disadvantage in obtaining the resources (e.g., education, jobs, connections) needed to gain influence in our civilization.
[Two sentences that go way off on a tangent deleted because, now that the parent comment has been deleted, they make no sense.]
A person who has a more realistic self-image than average might appear less nice than an average person who is equally nice. Thus, the choice to improve your epistemic rationality also causes you to implicitly lie to the people you interact with, conveying that you are a less nice person than you actually are.
I understand your first sentence, and agree ceteris paribus (but I think the person with the realistic beliefs is in a better position to become actually nicer). Your second sentence makes no sense to me. How is it implicitly lying to have accurate beliefs about how nice you are? The other way around seems more plausible.
The improved accuracy is a property of your own beliefs about yourself, not of other people’s beliefs about you. By increasing the accuracy of your beliefs about yourself, you simultaneously decrease the accuracy of other people’s beliefs about you (unless you compensate with additional signalling by other means, which may be impossible in a number of cases). Consciously compromising the accuracy of other people’s beliefs is usually called lying, or at least ‘not technically lying’.
I think that may be the most roundabout and head-spinny justification for self-deception I’ve ever heard. Wow. By a similar token, should I not take up gardening if it’s not within my power to update everyone who has the belief that I don’t garden?
I think that may be the most roundabout and head-spinny justification for self-deception I’ve ever heard.
Note that I don’t endorse self-deception; see my other comment in this thread. But the argument points to a downside of the choice. (The argument is related to the stance that, as a rationalist, you’d want to use rhetoric as much as is common (but not more), to avoid incorrectly signaling that your position is weak.)
By a similar token, should I not take up gardening if it’s not within my power to update everyone who has the belief that I don’t garden?
Normally, if you take up gardening, other people’s beliefs will either remain unchanged (in their prior state of knowledge, since they have no new evidence) or move up (towards the truth) upon receiving new evidence. Here, the situation is reversed: new evidence (not a new action; this is a point where your analogy breaks) will move people’s beliefs away from the truth.