The question is, how good are people at introspection: what if the strategies they report are not the strategies they actually use? For example, they might omit the parts that seem unimportant but that actually make the difference. (Maybe positive or negative thinking is irrelevant, but imagining blue things is crucial.)
Or what if “the thing that brings success” causes the narrative of the cognitive strategy, but merely changing the cognitive strategy will not cause “the thing that brings success”? (People imagining blue things will be driven to succeed in love, and also to think a lot about fluffy kittens. However, thinking about fluffy kittens will not make you imagine blue things, and therefore will not bring you success in love. Even if all people successful in love report thinking about fluffy kittens a lot.)
I think it's likely that gaining knowledge in this way will have systematic biases. (OK, this is probably true of all knowledge-acquisition strategies, but you pointed out some good ones for this particular knowledge-gathering technique.)
Anyway, based on my own research (and practical experience over the past few months doing this sort of modelling for people with and without procrastination issues), here are some things you can do to reduce the bias:
Try to inner sim using the strategy yourself and see if it works.
Model multiple people, and find the strategies that seem to be commonalities.
Check for congruence with people as they’re talking. Use common indicators of cached answers like instant answers or lack of emotional charge.
Make sure people are embodied in a particular experience as they discuss, rather than trying to “figure themselves out” from the outside.
Use introspection tools from a variety of disciplines, like Thinking at the Edge, Coherence Therapy, etc., that can give people better access to their internal models.
All that being said, there will still be bias, but I think with these techniques there's not SO much bias that it's a useless endeavor.