How are you going to tell if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness but reject all reports of failure on the grounds that people are just making excuses?
I can only assume you’re implying that that’s what I do. But as I’ve already stated, when someone has performed a technique to my satisfaction, and it still doesn’t work, I have them try something else. I don’t just say, “oh well, tough luck, and it’s your fault”.
There are only a few possible explanations for why “different things work for different people”:

1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves.
2. Some things only work on certain kinds of problems, and many problems sound superficially similar but actually differ in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2).
3. Some people have an easier time learning how to do some things than others, depending in part on how the thing is explained and on what prior beliefs, understandings, etc. they bring. (So that even though a test of technique A is nominally being performed, in practice one is testing an unknown set of variant techniques A1, A2, ...)
On LW, #1 is a popular explanation, but I have seen much more evidence consistent with #2 and #3. (For example, someone initially failing to apply a technique and later learning it supports #3, and discovering a criterion that predicts which of two techniques is more likely to work for a given problem supports #2.)
Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3 that, barring actual brain damage or defect, #1 seems like adding unnecessary entities to one’s model, without any theoretical or empirical justification whatsoever.
More than that, it sounds exactly like an attribution error, and an instance of Dweck’s “fixed” mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.
That’s why I consider view #1 bad instrumental hygiene, in addition to not being that likely to be true anyway. It’s a horrible negative self-prime to saddle yourself with.