Given our difference of opinion, I think we managed to conduct this dialogue with a fair amount of decorum. However, I don’t think we are going to reach any agreement. I have to go with the science.
You give any group of people a perfectionism or fear of failure test along with almost any procrastination scale and you get pretty much anywhere from a negative to at best a very weak positive correlation. And if you control for self-efficacy or self-confidence, that weak correlation disappears. Science does not back you up.
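For concreteness, “controlling for” a third variable here means a partial correlation. A minimal sketch of the arithmetic, using made-up illustrative numbers (not figures from any actual study): even a weak positive raw correlation can vanish once a shared third variable is partialled out.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Illustrative numbers only:
r_raw = 0.15   # fear of failure (x) vs. procrastination (y)
r_xz = -0.50   # fear of failure vs. self-efficacy (z)
r_yz = -0.40   # procrastination vs. self-efficacy

r_controlled = partial_corr(r_raw, r_xz, r_yz)
# the weak positive raw correlation drops to roughly zero (about -0.06)
```

The point of the sketch is just that when both variables correlate with self-efficacy, the raw correlation between them can be entirely an artifact of that shared relationship.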
Similarly, characterizing impulsiveness as a fudge factor, well that is just silly. A simple Google Scholar search will show over 45,000 citations on the term, including the groundbreaking work by George Ainslie. It really is a measure of system 1-heavy decision making, something that you yourself accept. In fact, there is enough science on it that I’m conducting a meta-analytic review. And, unlike fear of failure, you find a very strong correlation between impulsiveness and procrastination.
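The core of pooling correlations across studies in a meta-analytic review is an inverse-variance-weighted average on Fisher’s z scale. A minimal fixed-effect sketch, with made-up study numbers (not data from the actual review):

```python
import math

def pooled_corr(studies):
    """Fixed-effect pooled correlation via Fisher's z transform.

    studies: list of (r, n) pairs, one correlation and sample size per study.
    """
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)  # Fisher z transform of r
        w = n - 3          # inverse of the sampling variance of z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Hypothetical studies (r, sample size): the larger study pulls harder.
pooled = pooled_corr([(0.40, 103), (0.50, 53)])
```

Real meta-analyses typically add random-effects weighting and heterogeneity statistics, but the weighted-average-of-z core is the same.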
Now characterizing every technique that science has produced as not up to your standards is a little harsh. The book is a review of the literature. Essentially, researchers in peer-reviewed studies have conducted a variety of treatments, like stimulus control (which activates the cue-sensitive system 1), and found them very effective at reducing procrastination. I organize and report what works. Since there are a thousand ways to implement stimulus control, you can describe the general methodology, report its effectiveness, and give a few examples of how it can be used. If you know a better way to convey this information, I’m all ears. Of note, this is indeed an environmental fix to procrastination, one of several and not what you characterize as “don’t think that way or think something else.” Again, you come across as not having read the book.
On the other hand, I think you have been given pretty much a free ride up to this point. You make a lot of suggestions that are inconsistent with our present knowledge of the field (e.g., fear of failure). You make a quite bold claim that you have techniques that with one application will cure procrastinators, presumably by focusing solely on the expectancy or self-efficacy aspect of motivation. We can all make claims. Show me some peer-reviewed research (please, not clinical case studies).
On the long shot that you might be right and have all the magic bullets, do some experimental research and publish it in a respectable journal. I would welcome the correction. I have a lot of research interests and would be happy to be able to focus on other things. Personally, I don’t think you actually are going to do it. Right now, you have the warm belief that the rest of us studying this field are effectively a bunch of second-raters, as “science has not actually caught up to the in-field knowledge of people like myself.” If you actually do the research (with proper controls, like accounting for the placebo effect, which runs rampant through self-efficacy-type clinical interventions), you run the risk of having a very self-satisfying set of beliefs turned into flimsy illusions. Do you really think you are willing to take that risk? Given human nature, I’m sceptical but would love to be proven wrong.
You give any group of people a perfectionism or fear of failure test along with almost any procrastination scale and you get pretty much anywhere from a negative to at best a very weak positive correlation. And if you control for self-efficacy or self-confidence, that weak correlation disappears. Science does not back you up.
The above made me think of a paragraph that caught my eye while I was skimming through Robert Boice’s Procrastination and Blocking: A Novel, Practical Approach:
Second, [Procrastination and Blocking] seems hard to define and study. Its practical understanding will require direct observation of PBers acting as problematically dilatory and self-conscious individuals. As a rule, psychologists avoid the time and inconvenience of lengthy field studies. Instead, they prefer to draw occasional conclusions about PBing based on quick personality tests administered to college freshmen. In that way, they can feel like scientists, testing students in laboratory conditions and linking the results, statistically, to other test outcomes such as the seeming inclination of PBers to admit perfectionism or demanding parents (Ferrari and Olivette, 1994). A problem is that researchers lose sight of PBing as a real, costly, and treatable problem.
(Note: This was just an association I made. I haven’t read your book and I don’t mean to imply that you belong to the category of researchers described by Boice.)
Robert Boice’s Procrastination and Blocking: A Novel, Practical Approach
Interesting. I skimmed the introduction and it sounds like he’s writing about the kind of procrastination I mean when I say “procrastination”. Looks potentially worth a read; thanks for the tip.
Show me some peer-reviewed research (please, not clinical case studies).
This seems like an unreasonable thing to ask of a non-academic. Based on what I hear of academia, pjeby doesn’t have a good chance of obtaining funding for a controlled study nor of publishing his results in a respectable journal even if they are as good as he claims. Or am I wrong? It would be nice if I were incorrect on either of those things.
You are probably right. It was an overly onerous requirement on my part. However, peer review is the best stamp of research quality we have, and a meta-analysis is even better, comprising hundreds of peer-reviewed studies. I am passionate about science, well aware of the limitations of clinical expert opinion, and was probably too strident.
In truth, it is almost impossible for a sole practitioner to discern whether the efficaciousness of their treatment is due to the treatment itself or other apparently non-relevant aspects, such as the placebo effect or the personality of the clinician. There are some really effective clinicians out there who are successful through their innate ability to inspire. You need to do or rely on research to determine what is really going on (i.e., evidence-based treatment). There really isn’t any other way (really, really, really), and unless he gets this, there is nothing he will personally experience that will make him change his mind. This isn’t new though. Research has repeatedly shown that statistical analysis beats clinical opinion pretty much every time (here’s one from Paul Meehl, who I studied under and who was both a clinician and a statistician:
http://www.psych.umn.edu/faculty/grove/114meehlscontributiontoclinical.pdf).
This type of issue is never going to go away though. We have everything from homeopathy to applied kinesiology, all of which appear to work because people believe they work. The only way to separate out whether the motivational treatment is inherently effective is through research. If it is the placebo effect and you are happy with that being the source of whatever change you are seeing, then add a lot more pomp and ceremony; it ups the effect.
There are some really effective clinicians out there who are successful through their innate ability to inspire.
Heh. Doesn’t apply in my case, unless mere text on a screen qualifies as innate ability to inspire. (Most of my client work is done in text format, and I mostly try to teach people techniques which they can apply themselves.)
Really, if these clinicians are successful for this reason, then why isn’t there any research identifying what this “innate ability” consists of, so that other clinicians can be taught to be inspiring? Or, conversely, why isn’t some sort of inspirational-ability test made a qualification for licensing?
A phrase like “innate ability to inspire” is bad science and bad reductionism.
You need to do or rely on research to determine what is really going on (i.e., evidence based treatment).
Ah, that’s why auto mechanics have peer-reviewed journals in order to notice whether they can really fix cars, or just have an innate ability to inspire the cars. ;-)
Can a mechanic be wrong about why a car started working, or how it was broken? Absolutely. Does it matter to the mechanic? To the car’s owner? Not very much.
I wrote a response to your post above, but the site sits and spins for several minutes every time I submit it; I guess it’s too long. I referred back to various other postings on this site so you could get an idea of how strict LessWrong’s standards of reductionism and word usage actually are, and showed why individual falsifiability is a higher standard than peer-reviewed research, if you want a car that starts.
The type of research-based advice you’re touting doesn’t rise to the level of individual falsifiability, because you can still say “it’s proven science” even when it doesn’t work for that particular individual. I don’t have that retreat, because I only accept as “technique” processes which can be unequivocally stated as having worked or not worked for a particular application.
My longer post also detailed the likely areas where placebo effects could exist in my work, and described some of the difficulties in formulating an appropriate control placebo for same.
So, I do understand the difference between chemistry and auto mechanics, and I’m not claiming to be a good chemist, or that you’re a bad one. But I am saying that chemists who haven’t actually opened the hood and gotten their hands dirty might not be the best people to write owners’ manuals, even if they might be a good technical reviewer for such a manual.
Conversely, auto mechanics shouldn’t write chemistry textbooks, and I don’t have any delusions in that regard.
(Hopefully, this comment is short enough to actually get posted.)