+1, good summary. I mean, you can always set a five-minute timer if you want to think of more of the reasonably useful and desirable parts of rationality.
For this to work, you need to have spent enough time (usually after you have gained a reasonable amount of experience) with other rationality techniques to learn what you already have.
For that, you need some background knowledge of how to actually implement your claims, including explicit communication, face-to-face communication, and explicit communication about the thing that “seems” to match. Even with that knowledge, you can easily run into the problem of people being too shy to actually have the kind of conversations they would like (e.g. ones full of PUA jargon).
And you must be somewhat lacking in social skills.
If you happen to be a little shy, then you’ll have a problem with people being overly shy.
I have the impression that people who pick up a lot of social skills from a group often become very social and yet remain unable to overcome the obstacles they face. (I could just be too shy, but I’d really like an answer to “how can you show that you won’t be shy without being afraid?”)
In short, people can easily stay oblivious to social challenges for longer than it would take to overcome them. For example, making the first approach at a bar is a challenge to overcome. The other person will offer plenty of lectures at the bar and some social skills, although the most useful skills are the ones that create the social challenge for the other person.
While I acknowledge this, and see it as good advice, I don’t see why it should apply to everyone, or even to the most powerful people. If, for instance, some people have social skills that are fairly rare, so that they’re unable to overcome their lack of social skills, then that is a factor of two.
I guess if you wanted to be successful as a social worker in a social setting, that factor could be larger. In that case you probably used more social skills than you needed, and that seems to be your excuse.
I mean, yeah, agents (like everyone) benefit from social skills.
You say that people should not be allowed to have their preferences, but that they should just have their beliefs (e.g., that to have preferences is to have a larger utility).
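(To be explicit about how I'm reading that parenthetical, which is the standard decision-theoretic gloss rather than anything spelled out above: "having preferences" is usually taken to mean there is a utility function $U$ such that $A \succeq B \iff U(A) \ge U(B)$, i.e. the preferred option is just the one with the larger utility.)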
But there are some important questions which I think are not answered here:
Does this apply to a human brain [1], or to any other species [2] that we aren’t part of? [...]
[...]
In what sense do our decisions make sense if we don’t have a conscious mind?
Too real, GPT2, too real.
The problem of having a conscious mind seems like the single most useful thing here. The whole “be a conscious being” aspect seems very useful compared to the huge gap between conscious and unconscious minds, which otherwise looks something like the brain not having a conscious mind while still being pretty far off from one.
Of course, you could also try other approaches: your mind could be modeled as a computer or a CPU, or you could take a different approach entirely.
I suggest that maybe having one mind that does the opposite thing is more useful than one mind that “does both things”.
This seems relevant to Qiaochu’s interests.
I can’t think of anything else that is more relevant to Qiaochu’s interests.