Sometimes we might really, actually be over-analyzing things, and our true goals may be better discovered by paying more attention to what System 1 is telling us. If we don’t figure this out for ourselves, it might be other rational people who point it out. If someone says:
“If you’re trying to solve this problem, I believe you’re over-analyzing it. Try paying more attention to your feelings, as they might indicate what you really want to do.”
how are we supposed to tell whether they are:
someone trying to genuinely help us solve our problem(s) in a rational way
or
someone dismissing any attempt to analyze the problem at all?
It can only be one or the other. Now, someone might not have read Less Wrong, but that doesn’t preclude them from noticing when we really are over-analyzing a problem. When someone responds like this, how are we supposed to tell whether they’re just strawmanning rationality or genuinely trying to help us reach a more rational response?
This isn’t a rhetorical question. I have the same concerns as you, and I’m not sure how to ask this particular question better. Is it a non-issue? Am I using confusing terms?
I like the exploration of how emotions interact with rationality that seems to be going on over there.
For me, over-analysis would be where further analysis is unlikely to yield practically improved knowledge of the options for solving the problem at hand. I’d probably treat this as quite separate from bad analysis, or from the information supplied by instinct and emotion. In that sense, emotions wouldn’t bear on the question of over-analysis generally. However, I’d heartily agree with the proposition that emotions are a good topic of exploration and study, both because they provide good option selection in certain situations and because knowledge of them might help control and account for emotionally based cognitive bias.
I guess the above would inform the question of whether the person you describe is rationally helping or just strawmanning. My sense is that in many cases the term “over-analysis” is thrown around as a kind of defence against the mental discomfort that deep thought and the changing of ideas might bring, but perhaps I’m being too judgemental. Other times, of course, the person is actually identifying hand-wringing and inaction that we’re too oblivious to identify ourselves.
In terms of identifying true goals, I wonder if the contextuality and changeability of emotion would render it a relevant but ultimately unreliable source for deriving true goals. For example, in a fierce conflict it’s fairly tempting to perceive your goals as fundamentally opposed or opposite to your opponent’s, but I wonder if that’s really a good position to form.
In the end, though, people’s emotions are relevant to their perception of their goals, so I suspect we do have to address emotions in the case for rationality.
Does CFAR have its own discussion forum? I can’t find one so far. Do you know what CFAR thinks about the “winning” approach held by many LWers?
CFAR has its own private mailing list, which isn’t available to individuals who haven’t attended a CFAR event before. As a CFAR alumnus, though, I can ask them your questions on your behalf. If I get a sufficient response, I can summarize their insight in a discussion post. I believe CFAR alumni are 40% active Less Wrong users and 60% not. The base of CFAR, i.e. its staff, may have a substantially different perspective from the hundreds of workshop participants who make up the broader community.
I think I’d be quite interested to know what % of CFAR people believe that rationality ought to include a component of “truthiness”. Anything that could help on that?
I don’t 100% follow your comment, but I find the content of those links interesting. Care to expand on that thought at all?