Your point that most self-help books are not backed by peer review is very true, and something more people should be aware of. OTOH, I did trials with a number of self-help books, and how useful they were to me and my friends did not correlate with scientific rigor: https://acesounderglass.com/2018/04/14/self-help-epistemic-spot-check-results/. Given that, I think it makes sense to put more emphasis on individual testability.
This is a fair point, and I find it interesting that you did this and came to the conclusion that “nothing short of trying it seems to have any predictive ability of whether or not it is helpful”. When you hand somebody a self-help book, it is hard to say exactly what the treatment being tested should be. If the treatment is just handing out the book, then bad writing alone may keep the treatment effects low. If the treatment is applying all of the techniques, then the test will be very expensive for some of the books, and the author can make up excuses for why something does not work.
A topic I considered covering but postponed is cognitive dissonance: if people have invested time into following a technique, they want to find it useful afterwards. Basically, people then apply the same biases that make them fall prey to cold reading and other con-artist techniques: they attribute positive developments to having read the book and ignore counter-evidence. Some self-help authors have a large cult around them, probably starting with Napoleon Hill. Cognitive dissonance also applies to self-help communities without a guru at the center, because fans mutually reinforce their beliefs. Eye training may be an example.
But of course not everything under the bookshelf label “self-help” is like that. The following things should make you very suspicious:
- The author writes in a very obscure style, and it is hard to grasp what he is talking about.
- The author claims to have discovered a magical secret, as opposed to either scientifically-based advice or commonsense observations.
- You decide to apply the techniques, do so for a while, but nothing happens (and there is no more plausible explanation than that the techniques themselves don’t work).
- The techniques are basically impossible to apply in practice.
- The techniques depend strongly on belief (if you don’t believe it, it doesn’t work).
The epistemic problem here is that many of these things can “work” on some level, like a mental placebo. But they work because, for example, you have decided to change your life and become more confident, not because of some magic the author describes. Maybe you just feel better because you took time to read a book instead of just sitting there worrying. Maybe you imagined yourself in a better state, and that made you feel better. I find it hard even to define a placebo for that. In all these cases, the willingness to start is more important than the content of the book, which of course poses a testability problem for the content. (I believe this is also a problem for the testability of recognized psychotherapies.)