Trying to hide bad signaling? To the Dark Side, lead you it will.
Or is that just a point of view?
Edit: This is old material. It may be out of date.
I’m going to assume familiarity with the common use of the following two terms on this site:
- Dark Arts
- Signaling
Otherwise don’t worry: I’ve hung out here for ages and I still need to update my cache of terms quite often. If you have questions about either term after reading the wiki, please feel free to ask; there are people much more knowledgeable than me who will probably answer them. I don’t know if other users agree, but the Discussion section seems like the best place to ask questions that may already have been covered elsewhere, for people who have trouble despite extensive study. In a way, this OP is itself an example of that.
I’m also making the following assumptions:
1. People are more often rational than otherwise when the rational answer happens to say good things about them.
2. I hope people here agree that learning to be more rational will, at least in some areas, necessarily change your beliefs (it’s unlikely any one person is right about everything).
3. If one hasn’t changed any beliefs, it’s likely that they haven’t employed rationality where it would do them the most good.
4. People are better at defending or promoting a position when they think it’s true.
5. From the above points it seems to follow that people who become more rational than average will also send more bad signals than they would have if they hadn’t become more rational.
6. Most people would try to avoid 5.; it represents a disincentive for becoming more rational.
The main question of this thread:
How can one work around 5. without employing Dark Arts to sanitize the feelings accompanying a conclusion? Is it even possible? Can or should we talk about this and try to find and catalogue ways to do it, since many of us are not skilled at social interaction (a higher-than-average share of self-identified non-neurotypicals visit LW)?
Notes:
- I also wish to emphasise that not only do some conclusions send bad signals, but wanting to open *some* topics to rational inquiry in itself often sends bad signals, even if you do eventually end up with a conclusion that sends good signals.
- I feel that, even if it isn’t possible to hide bad signaling, the better map of reality one enjoys will offset these costs in other ways. Still, considering we are social animals, I think many people, myself included, would quite strongly like to avoid this particular cost.