Jonah has recently been attempting to persuade Eliezer that Eliezer's comparative advantage lies not in the FAI research he is currently doing but in (more) evangelism. Now we have a post explaining how status-signalling-related motivated cognition can cause people to overestimate the value of the altruistic efforts they happen to have personally chosen. This is almost certainly true: typical human biases work like that in all similar areas, so it would be startling to find that an activity so heavily evolutionarily entangled with signalling motives was somehow immune! I feel it is important, however, to at least make passing acknowledgement that this exhortation about motivated cognition is itself subject to motive.
Jonah himself acknowledges that people are more likely to suggest motivated cognition as something the other guy might be suffering from than to apply it to themselves. In this case there is no overt claim like "... and therefore you should believe the guy I was arguing with is biased and so agree with me instead", and I don't believe Jonah intends anything so crude. Still, the recent context changes the meaning of any given post; at the very least, the context and expected social influence of a post affect how I personally evaluate the contributions I encounter, and I do not currently consider that reading habit a bug.
To be clear, the pattern "significant argument --(short time)--> post by one participant pointing out a bias the other participant may have" isn't always cause to reject the post. This one isn't particularly objectionable (a tad obvious, but that's fine in discussion). Nevertheless, for the purpose of making the actual explicit point without distraction, I suggest it is usually best to keep such posts in draft form for a couple of weeks and publish them once the context has lost its relevance. Either that, or include a lampshade or disclaimer acknowledging the relevance to the existing conversation. There is something about acting oblivious that invites scepticism.
In writing my post, I had a number of different examples in the back of my mind.
Even though I don't think that MIRI's current Friendly AI research is of high value, I believe there are instances in which people have undervalued Eliezer's holistic output for the reason I describe in my post.
There's a broader context that my post falls into: note that I've made 11 substantive posts over the past 2.5 weeks, on subjects ranging from GiveWell's work on climate change and meta-research, to effective philanthropy in general, to epistemology.
You may be right that I should be spacing my posts out differently in time.
I endorse the lampshade approach significantly more than the delay approach.
More generally, I endorse stating explicitly whatever motivational or cognitive biases may nonobviously be influencing my posting whenever doing so isn’t a significant fraction of the effort involved in the post.
For example, right now I suspect I’m being motivated by interpreting wedrifid’s comment as a relatively sophisticated way of taking sides in the Jonah/Eliezer discussion he references, and because power struggles make me anxious my instinct is to “go meta” and abstract this issue further away from that discussion.
In retrospect, that isn’t really an example; working out that motive and stating it explicitly was a significant fraction of the effort involved in this comment.