I prefer to model people as adaptation executors who respond to subcommunications and signals in a way that was optimized by evolution, and then, if asked, confabulate verbal rationalizations for their behavior.
What predictions does this model let you make? When have you seen it compellingly confirmed in situations where other models would have had you predict something else? It sounds dangerously vulnerable to epicyclic adaptation to individual cases that don’t align with it.
The ‘fake it until you make it’ school of self-improvement is based on this kind of model. For example, if you want to be a self-confident person and derive the benefits of self-confidence, start out ‘faking’ self-confidence and mimicking the behaviours and signals of self-confident people. Other people will generally respond to this as they would respond to someone who is ‘actually’ self-confident, and a virtuous circle will result in you eventually not having to fake the self-confidence any more.
A prediction of this kind of model might therefore be that the best way to improve self-confidence is to consciously mimic the behaviours of self-confident individuals rather than to try to ‘internally’ improve your self-confidence. Anecdotally I see some evidence that this works, but I also see some evidence that evolution has made people better at detecting fakers than a naive version of the model might suppose.
If you understand the subconscious mechanisms and how they were tuned to the old environment, and how the old differs from the new, you will eventually see better hacks.
I’m not going to talk about many of those here because I tried before and it went badly.
It sounds dangerously vulnerable to epicyclic adaptation to individual cases that don’t align with it.
As is the other model: the one where you model them as reasoning engines that reason logically from explicitly stated ethical principles. Here, you can just keep varying which of the many principles they are supposed to be following (since human commonsense morality contains so many different and mutually incompatible principles, so many special circumstances, weaknesses of will, etc.).
There are some solid experiments, e.g. on moral dumbfounding, that back this up. Also, as soon as you expose people to a correct contrarian idea, you’ll see them attack it with a torrent of confabulated excuses.
I am quite fond of this model of people: I think it should be used more. Though agreed that we should test it, criticize it, etc.
Why not just set all of your self-beliefs to “strongly positive”, to the extent that you can get away with it? . . . Why not just go the whole hog and believe you’re very kind, very generous, very witty, very honorable, very trustworthy, etc...
Arrogance and a pervasive positive self-image are strong signals of high status. People will respond positively to them.
It would probably be better for our civilization, IMHO, if individuals were much less arrogant and much less self-confident. Existential risks, for example, would probably be lower if the scientists and technologists in certain fields were less confident of the moral goodness of their actions and of their skill at avoiding terrible mistakes. And risks would be reduced if their opinion of their own status (which of course is highly correlated with their actual status) were lower, since lower-status people spend more time doubting the goodness or rightness of their effects on the world and are less prone to rationalization. It is hard to change the current over-confident equilibrium, however, because low-confidence individuals are at a competitive disadvantage in obtaining the resources (e.g., education, jobs, connections) needed to gain influence in our civilization.
[Two sentences that go way off on a tangent deleted because, now that the parent comment has been deleted, they make no sense.]