There is much more to being agentic than nonconformity. I apologize for the unusual rambliness of this post. Let me highlight where I tried to express this:
Returning to the question of willingness to be weird: it is more a prerequisite for agency than the core definition. An agent who is trying to accomplish a goal as strategically as possible, running a fresh computation and searching for the optimal plan, simply doesn't want to be restricted to existing solutions. If an existing solution is the best one, no problem; the point is just that you don't want to throw out an optimal solution merely because it's unusual.
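The point above can be sketched as a plain argmax over candidate plans, where conventionality plays no role in the score. The plan names and scores below are invented for illustration and are not from the post:

```python
# Sketch: an agent scores every candidate plan on goal achievement alone,
# so an unusual plan is kept or discarded purely on its score.
# Plans and their scores are hypothetical.

candidate_plans = {
    "standard_approach": 0.6,
    "unusual_approach": 0.9,  # weirdness is not penalized
    "do_nothing": 0.1,
}

def pick_plan(plans):
    """Return the plan with the highest score, ignoring how conventional it is."""
    return max(plans, key=plans.get)

print(pick_plan(candidate_plans))
```

If the conventional plan scored highest, the same rule would pick it; the search simply never filters a candidate out for being unusual.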
I would add that it seems like you've focused your entire thought process on the problem of how "rationality" works, and you've also discussed the problem of how to actually get rationality done.
I am not sure to what extent you think I can come up with any reasonably useful and desirable parts of rationality that your proposal doesn't actually consider.
For this to work, you need to have spent enough time (usually after you have a reasonable amount of experience) with other rationality techniques to learn what you already have.
For that, you must have some background knowledge of how to actually implement your claims: explicit communication, face-to-face communication, and explicit communication about the thing that "seems" to match. Even with that knowledge, you can easily run into the problem of people being too shy to actually have the kind of conversations they like (e.g. PUA jargon).
And you must be somewhat lacking in social skills.
If you happen to be a little shy, then you’ll have a problem with people being overly shy.
I have the impression that people who pick up a lot of social skills from a group can often become very social and yet remain unable to overcome the obstacles they face. (I could be too shy myself, but I'd really like an answer to "how can you show that you won't be shy without being afraid?")
In short, people can easily stay oblivious to social challenges for longer than they need to overcome them. For example, the first hit at a bar is a challenge to overcome. The other person will give a lot of lectures at their bar along with some social skills, although the most useful ones are the ones that create the social challenge for the other person.
While I acknowledge this, and I see it as good advice, I don't see why it should apply to everyone, or even to the most powerful people. If, for instance, some people have social-skill deficits that are fairly rare, so that they're not able to overcome them, then that is a factor of two.
I guess if you wanted to be successful as a social worker in a social setting, that could matter more; in that case you probably used more social skills than you needed, and that seems to be your excuse.
You say that people should not be allowed to have their preferences but should just have their beliefs (e.g., to have preferences is to have a larger utility).
But there are some important questions which I think are not answered here:
Does this apply to a human brain [1], or to any other species [2] that we aren’t part of? [...]
[...]
In what sense do our decisions make sense if we don't have a conscious mind?
The problem of having a conscious mind seems like the single most useful thing here. The whole "be a conscious being" aspect seems very useful compared to the huge gap between conscious and unconscious minds, which otherwise seems to be something like how the brain doesn't have a conscious mind but is pretty far off from it.
Of course, you could also try other approaches: your mind could be a computer or a CPU, or you could try some different approach entirely.
I suggest that maybe having one part of the brain that does the opposite of something is more useful than one part that "does both things".
+1, good summary. I mean, you can always set a five-minute timer if you want to think of more reasonably useful and desirable parts of rationality.
I mean, yeah, agents (like everyone) benefit from social skills.
Too real, GPT2, too real.
This seems relevant to Qiaochu’s interests.
I can’t think of anything else that is more relevant to Qiaochu.