This is interesting. Actually, you are quite right in that TMT is an overall integrative model. It was designed to be a Rosetta stone, allowing us to draw findings and applications from different fields into a coherent whole. It operates at one level of detail and has its uses, just as a map of a city is useful but not equivalent to a blueprint of a house (though neither is wrong). For example, it excluded nonsense solutions, which the field is rife with.
You have a naturally critical mind, which is useful, but you are taking a few cognitive shortcuts. From what you write, it doesn’t seem like you actually read the book or the article. The article formally integrates prospect theory, under the section on CPT. CPT is the subsequent update to prospect theory by Kahneman and Tversky; see pages 894-895 (e.g., “Consequently, other researchers have already proposed various integrations of prospect theory with some hyperbolic time-discounting function”). Chapter three of the book is an extended review of system one and system two, including a historical review going back to Plato. The last three chapters then, using TMT as an organizing model, review the applied science on this, the techniques that have been successfully used to increase self-regulation.
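For readers who haven't seen it, the core TMT equation is compact enough to state in a few lines. This sketch uses the form published in Steel and König's 2006 paper; the parameter values are purely illustrative, not taken from the book:

```python
# Temporal Motivation Theory (Steel & Konig, 2006):
#   Motivation = (Expectancy * Value) / (1 + Impulsiveness * Delay)
# The "1 +" in the denominator keeps motivation finite as the deadline arrives.
def tmt_motivation(expectancy, value, impulsiveness, delay):
    return (expectancy * value) / (1 + impulsiveness * delay)

# Illustrative numbers only: a task with high expectancy and value
# still loses out to impulsive alternatives when the deadline is far away.
far_off = tmt_motivation(expectancy=0.9, value=10, impulsiveness=1.0, delay=30)
due_soon = tmt_motivation(expectancy=0.9, value=10, impulsiveness=1.0, delay=1)
```

As the delay shrinks, motivation climbs sharply, which is the model's account of last-minute bursts of effort.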
What would be useful is this. What precise techniques do you take issue with? Are there any you think ineffective or too vague to be applied? Though everything was already scientifically vetted, maybe I could have been clearer in sections. Given other feedback, I found that many people needed a better walkthrough of how to apply these techniques. In the paperback version, I added a step-by-step guide. So is it the techniques or the explanation?
Alternatively, you might have insight into specific techniques that the book neglected. This is quite possible, as I didn’t want to include less developed techniques, ones without proven value. Developing a full package of self-regulatory techniques is exactly where the science needs to go, and it is why what LessWrong is doing is quite remarkable. We don’t have this. Instead, the area of motivation is splintered into competing theories and practices, some redundant to one another and others simply isolated. What gets marketed to us from the self-help arena is often out of date or simply wrong. Aside from LessWrong, I don’t know of another concerted effort to change this.
Think of the book as version 1.0. What do you want in the next upgrade? You like the basic model, which is a start. It can help direct people towards broad areas of weakness (e.g., the diagnostic test in the book, which notably accounts for about 70% of the variance in people’s procrastination scores). Then, we have a series of techniques to address these weaknesses, outlined in chapters 7, 8, and 9. What’s next? Can we expand on them? Can we refine or improve their implementation? Can we express them in ways that help people adopt them? Can we combine them into something more powerful? These are questions worth asking.
To some extent, I can contribute to LessWrong on a positive venture like this. It is serious, useful, and noble.
The article formally integrates prospect theory, under the section CPT. CPT is actually the next update to prospect theory by Kahneman and Tversky, see pages 894-895 (e.g., “Consequently, other researchers have already proposed various integrations of prospect theory with some hyperbolic time-discounting function”).
Yes, I’m aware of that. I was pointing out that the additional complication of hyperbolic discounting isn’t necessary; in helping dozens of people work through procrastination difficulties, and myself through many more dozens of specific instances, hyperbolic discounting hasn’t been particularly relevant to the process. Frankly, it’s never come up. In virtually all cases, any discounting effects have been dominated by more fundamental factors like negative value perceptions, and getting rid of those perceptions means the discount on the positive value is irrelevant.
(Note that plain old prospect theory is enough to predict this: if losses count double relative to gains, you get bigger wins by reducing losses than by increasing expected gains.)
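That arithmetic is easy to check with the standard prospect-theory value function. The α and λ parameters below are Tversky and Kahneman's commonly cited estimates, and the one-unit comparison is only an illustration:

```python
# Prospect-theory value function: concave for gains, convex and
# steeper (by the loss-aversion factor lam) for losses.
def pt_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Subjective improvement from adding one unit of gain...
gain_improvement = pt_value(2) - pt_value(1)
# ...versus removing one unit of loss.
loss_reduction = pt_value(-1) - pt_value(-2)
```

With λ around 2, the loss-side move is roughly twice as large, matching the "losses count double" rule of thumb.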
What precise techniques do you take issue with?
I don’t recall seeing anything in The Procrastination Equation that qualified in my mind as a “technique”; it looked more like “advice” to me, and I try not to deal in advice, if I can avoid it.
The distinction for me is that a technique would involve cognitive steps that would repeatably bring about a change in behavior, without requiring the steps themselves to be repeated for that particular instance of procrastination. (Or if some repetition were required, it should be an extremely simple technique!)
To my recollection, there was nothing in the book that claimed to be such, or that claimed better results, repeatability, ease of training, or ease of use than techniques I already used or taught. That’s the criterion I use when reading self-help materials: if a technique or method isn’t claimed to be at least as good as something I’ve already tested and found useful, I don’t bother testing it.
Generally speaking, the absence of sufficiently-specific mental steps and the absence of a claim of repeatability means there’s no “technique” there, in the sense of “here are the steps to break down and clean a model 36X carburetor”. There’s just “advice” as in, “you might want to check the carburetor if your car isn’t starting”. It was this latter type of advice that I recall having found in TPE; if there was an actual technique in the book, it was quite well-hidden.
Think of the book as version 1.0. What do you want in the next upgrade?
Er, nothing? ;-) I don’t care about the book. I guess from the hints you’re dropping that you’re the author? I’m not interested in having an improved set of techniques in the book, unless they claim greater ease or effectiveness along the criteria I mentioned above. I have and teach plenty of techniques that work quite well.
What my comment was saying is simply that science has not actually caught up to the in-field knowledge of people like myself who actually fix people’s procrastination. When I read books on procrastination, I use them to harvest the knowledge of other practitioners, and of course knowing about the science is nice if it leads to new ideas for practical techniques.
The reason I said your book was rubbish from a practical perspective is because it contained nothing I wasn’t teaching people in 2006, except an added fudge factor called “impulsivity”. And it ignores virtually every piece of brain mechanics that’s actually involved in fixing the types of chronic procrastination problems I help people with, such as fear of failure, stereotype threats, mis-set expectations, “should” beliefs, and so on.
Again, I could be in error on this point, it’s been a long time since I read the book, but I seem to recall it basically offered advice at the level of, “don’t think that way” or “think something else”. And in my experience, that detail level is useless for teaching someone to actually think in a particular way that resolves a problem.
It sounds to me like our goals differ in any case; note for example:
ones that have been successfully used to increase self-regulation.
If I understand this statement correctly, our goals are actually opposed: I do not want to increase anybody’s self-regulation; I want them to naturally do the right thing, without any conscious self-regulation required. A technique I use or teach has to have the effect of altering ongoing motivation with respect to a task, preferably after a single application of the technique, and without requiring someone to change their environment or alter their incentives externally. (e.g. rewards, environment changes, etc.)
Did your book even claim to offer anything like that? If so, I missed it.
Given our difference of opinion, I think we managed to conduct this dialogue with a fair amount of decorum. However, I don’t think we are going to reach agreement. I have to go with the science.
You give any group of people a perfectionism or fear of failure test along with almost any procrastination scale and you get pretty much anywhere from a negative to at best a very weak positive correlation. And if you control for self-efficacy or self-confidence, that weak correlation disappears. Science does not back you up.
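For anyone unfamiliar with the "control for" step, it is a partial correlation. The sketch below uses synthetic data with made-up effect sizes, purely to show how a raw correlation can vanish once a common cause (here, low self-efficacy) is regressed out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical structure: low self-efficacy drives both fear of
# failure and procrastination; there is no direct link between them.
self_efficacy = rng.normal(size=n)
fear_of_failure = -0.6 * self_efficacy + rng.normal(scale=0.8, size=n)
procrastination = -0.5 * self_efficacy + rng.normal(scale=0.9, size=n)

def partial_corr(x, y, z):
    """Correlate x and y after regressing each on z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(fear_of_failure, procrastination)[0, 1]
controlled = partial_corr(fear_of_failure, procrastination, self_efficacy)
```

With these invented coefficients, the raw correlation comes out around 0.3, while the partial correlation hovers near zero: exactly the pattern described above.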
Similarly, characterizing impulsiveness as a fudge factor is just silly. A simple Google Scholar search will show over 45,000 citations on the term, including the groundbreaking work by George Ainslie. It really is a measure of System 1-heavy decision making, something that you yourself accept. In fact, there is enough science on it that I’m conducting a meta-analytic review. And, unlike fear of failure, you find a very strong correlation between impulsiveness and procrastination.
Now, characterizing every technique that science has produced as not up to your standards is a little harsh. The book is a review of the literature. Essentially, researchers in peer-reviewed studies have tested a variety of treatments, like stimulus control (which activates the cue-sensitive System 1), and found them very effective at reducing procrastination. I organize and report what works. Since there are a thousand ways to implement stimulus control, you can describe the general methodology, report its effectiveness, and give a few examples of how it can be used. If you know a better way to convey this information, I’m all ears. Of note, this is indeed an environmental fix to procrastination, one of several, and not what you characterize as “don’t think that way or think something else.” Again, you come across as not having read the book.
On the other hand, I think you have been given pretty much a free ride up to this point. You make a lot of suggestions that are inconsistent with our present knowledge of the field (e.g., fear of failure). You make the quite bold claim that you have techniques that will cure procrastinators with a single application, presumably by focusing solely on the expectancy or self-efficacy aspect of motivation. We can all make claims. Show me some peer-reviewed research (please, not clinical case studies).
On the long shot that you are right and have all the magic bullets, do some experimental research and publish it in a respectable journal. I would welcome the correction; I have a lot of research interests and would be happy to focus on other things. Personally, I don’t think you are actually going to do it. Right now, you have the warm belief that the rest of us studying this field are effectively a bunch of second-raters, as “science has not actually caught up to the in-field knowledge of people like myself.” If you actually do the research (with proper controls, like accounting for the placebo effect, which runs rampant through self-efficacy-type clinical interventions), you run the risk of having a very self-satisfying set of beliefs turn into flimsy illusions. Do you really think you are willing to take that risk? Given human nature, I’m sceptical but would love to be proven wrong.
You give any group of people a perfectionism or fear of failure test along with almost any procrastination scale and you get pretty much anywhere from a negative to at best a very weak positive correlation. And if you control for self-efficacy or self-confidence, that weak correlation disappears. Science does not back you up.
The above made me think of a paragraph that caught my eye while I was skimming through Robert Boice’s Procrastination and Blocking: A Novel, Practical Approach:
Second, [Procrastination and Blocking] seems hard to define and study. Its practical understanding will require direct observation of PBers acting as problematically dilatory and self-conscious individuals. As a rule, psychologists avoid the time and inconvenience of lengthy field studies. Instead, they prefer to draw occasional conclusions about PBing based on quick personality tests administered to college freshmen. In that way, they can feel like scientists, testing students in laboratory conditions and linking the results, statistically, to other test outcomes such as the seeming inclination of PBers to admit perfectionism or demanding parents (Ferrari and Olivette, 1994). A problem is that researchers lose sight of PBing as a real, costly, and treatable problem.
(Note: This was just an association I made. I haven’t read your book and I don’t mean to imply that you belong to the category of researchers described by Boice.)
Robert Boice’s Procrastination and Blocking: A Novel, Practical Approach
Interesting. I skimmed the introduction and it sounds like he’s writing about the kind of procrastination I mean when I say “procrastination”. Looks potentially worth a read; thanks for the tip.
Show me some peer-reviewed research (please, not clinical case studies).
This seems like an unreasonable thing to ask of a non-academic. Based on what I hear of academia, pjeby doesn’t have a good chance of obtaining funding for a controlled study nor of publishing his results in a respectable journal even if they are as good as he claims. Or am I wrong? It would be nice if I were incorrect on either of those things.
You are probably right. It was an overly onerous requirement on my part. However, peer review is the best stamp of research quality we have, and a meta-analysis, comprising hundreds of peer-reviewed studies, is even better. I am passionate about science, well aware of the limitations of clinical expert opinion, and was probably too strident.
In truth, it is almost impossible for a sole practitioner to discern whether the efficacy of their treatment is due to the treatment itself or to apparently non-relevant aspects, such as the placebo effect or the personality of the clinician. There are some really effective clinicians out there who are successful through their innate ability to inspire. You need to do or rely on research to determine what is really going on (i.e., evidence-based treatment). There really isn’t any other way (really, really, really), and unless he gets this, there is nothing he will personally experience that will make him change his mind. This isn’t new, though. Research has repeatedly shown that statistical analysis beats clinical opinion pretty much every time (here’s one from Paul Meehl, who I studied under and who was both a clinician and a statistician: http://www.psych.umn.edu/faculty/grove/114meehlscontributiontoclinical.pdf).
This type of issue is never going to go away, though. We have everything from homeopathy to applied kinesiology, all of which appear to work because people believe they work. The only way to determine whether a motivational treatment is inherently effective is through research. If it is the placebo effect and you are happy with that being the source of whatever change you are seeing, then add a lot more pomp and ceremony; it ups the effect.
There are some really effective clinicians out there who are successful through their innate ability to inspire.
Heh. Doesn’t apply in my case, unless mere text on a screen qualifies as innate ability to inspire. (Most of my client work is done in text format, and I mostly try to teach people techniques which they can apply themselves.)
Really, if these clinicians are successful for this reason, then why isn’t there any research identifying what this “innate ability” consists of, so that other clinicians can be taught to be inspiring, or, conversely, so that some sort of inspirational-ability test can be made a qualification for licensing?
A phrase like “innate ability to inspire” is bad science and bad reductionism.
You need to do or rely on research to determine what is really going on (i.e., evidence-based treatment).
Ah, that’s why auto mechanics have peer-reviewed journals in order to notice whether they can really fix cars, or just have an innate ability to inspire the cars. ;-)
Can a mechanic be wrong about why a car started working, or how it was broken? Absolutely. Does it matter to the mechanic? To the car’s owner? Not very much.
I wrote a response to your post above, but the site sits and spins for several minutes every time I submit it; I guess perhaps it’s too long. I referred back to various other postings on this site, so you could get an idea of how strict LessWrong’s standards of reductionism and word usage actually are, and showed why individual falsifiability is a higher standard than peer-reviewed research, if you want a car that starts.
The type of research-based advice you’re touting doesn’t rise to the level of individual falsifiability, because you can still say “it’s proven science” even when it doesn’t work for that particular individual. I don’t have that retreat, because I only accept as “technique” processes which can be unequivocally stated as having worked or not worked for a particular application.
My longer post also detailed the likely areas where placebo effects could exist in my work, and described some of the difficulties in formulating an appropriate control placebo for same.
So, I do understand the difference between chemistry and auto mechanics, and I’m not claiming to be a good chemist, or that you’re a bad one. But I am saying that chemists who haven’t actually opened the hood and gotten their hands dirty might not be the best people to write owners’ manuals, even if they might be a good technical reviewer for such a manual.
Conversely, auto mechanics shouldn’t write chemistry textbooks, and I don’t have any delusions in that regard.
(Hopefully, this comment is short enough to actually get posted.)