The positives first: In collecting the akrasia data, I’ve been impressed by the good reviews of your motivation techniques (as well as the lack of negative reviews for them, as contrasted with e.g. the Applied Picoeconomics technique). I found this post well-written and full of some good ideas. I like the classification proposed, I think the section on the self-help industry is particularly insightful, and most importantly, I think that the central hypothesis is a promising start, which may be valid.
The main concern I have is that this hypothesis is a fuzzy theory of the workings of the mind which appears to be more rigorous than it is: it seems to me that one could explain any experimental outcome by inventing more and more sources of conflict. What it needs to become less fuzzy is a mechanism for making specific testable predictions, preferably on topics that are feasible to test, where one doesn’t already have the results to guide one’s use of the theory, and preferably such that some of the experimental results it predicts are counterintuitive (the better to separate the hypothesis from other ones).
What you hypothesize here is interesting, it appears to be a useful framework for thinking about one’s motivational problems, and it may well be true. But I wouldn’t trust it far enough to, say, pay for a novel technique derived from this theory without having heard evidence for that specific technique’s success, or experimental evidence supporting a more precise version of the theory. If you have a genuinely less fuzzy version, or an untested counterintuitive prediction, or if you don’t claim this as anything more than a useful way to think about akrasia at present, then I have no strong objection.
P.S. I remain rather puzzled by the way you started the post: it bugged you that I was just collecting data, and not yet proposing any analyses? I don’t believe I was trying to block anyone from categorizing or analyzing the information; in fact, I should think that my post would make that easier.
The main concern I have is that this hypothesis is a fuzzy theory of the workings of the mind
Actually, I carefully avoided theorizing about the workings of the mind, in the sense of adding any theory that isn’t already more-or-less accepted science.
What I’ve stated is merely the near-tautological truth that the only reason you wouldn’t do what you intend is if there’s something stopping you—i.e., something in conflict. I then described many ways you can be in conflict.
The purpose of this post was simply to show that thinking about akrasia as if it’s a single thing is both suboptimal and a logical confusion. Akrasia isn’t a thing; it’s a term for “something stopping you”, where the “something” is unspecified.
If you taboo “akrasia” and apply the same analysis as you are now doing, you will see that you end up right where I did. The only way you can remain confused is by thinking that “akrasia” is a real thing, rather than a labeled node in your neural network that lights up when certain other properties are present.
Or, to put it another way, your objections to the hypothesis are roughly equivalent to asking for a test to prove whether a tree falling in the forest makes a sound when there isn’t anyone hearing it. The answer depends not on the state of reality, but on the state of your definitions.
I remain rather puzzled by the way you started the post: it bugged you that I was just collecting data, and not yet proposing any analyses?
No, it was the discussion as a whole, and the apparent lack of deep thought that anybody was putting into the subject. It struck me as a bit like the blind men and the elephant parable, and I thought it might be good to enumerate the basic attributes of the elephant in the room.
The various sorts of akrasia are about as dissimilar to one another as the different parts of an elephant are, or as different sorts of cancer are to each other: the fact that we have only one word for a thing doesn’t make it one thing.
What I’ve stated is merely the near-tautological truth that the only reason you wouldn’t do what you intend is if there’s something stopping you—i.e., something in conflict. I then described many ways you can be in conflict.
If you can describe “all instincts, thoughts and drives aligned in a common purpose, but significant damage to the prefrontal cortex” as ‘conflict’, then it is probably too fuzzy.
I can only make sense out of your comment if I think of akrasia as if it exists. In other words, the fuzziness is coming from the fact that you’re confused.
Akrasia is a word we use when we don’t do what we intend. However, we can only fail to do what we intend if there is something between what we intend and what we do. The nature of this “something” is not specified by the term akrasia.
It’s sort of like saying “not dying” is your strategy for immortality. Akrasia is “not doing what you intend”—it’s not a thing.
That’s why you think I’m being fuzzy—because I’ve expanded akrasia to show some of the terms that could be inserted for the “something” stopping you. But the fuzziness was there all along, in the word akrasia. Don’t blame me for showing you the words you’re using are fuzzy. ;-)
[Edit to add: nothing I’m saying has anything to do with physiology or psychology. I’m just saying that saying you have “akrasia” is as vague as saying you have “Car-won’t-startia”—it tells you nothing about why the car won’t start. Did somebody steal the engine? Is there no gas? This article is an attempt to list some of the causes of car-won’t-startia, so of course it’s all over the place.]
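The car analogy can be made concrete with a toy sketch (the cause names and the `diagnose` helper here are hypothetical illustrations, not anything from the post): the label names only the symptom, while a fix requires a specific cause.

```python
# Hypothetical causes of "car-won't-startia" and their fixes. The label
# itself names only the symptom, and so matches every entry equally well.
CAUSES = {
    "engine stolen": "recover or replace the engine",
    "tank empty": "add fuel",
    "battery dead": "charge or replace the battery",
}

def diagnose(observations):
    """Return the specific causes (and fixes) consistent with what we observed.

    Passing no observations -- i.e. only the bare symptom label -- narrows
    nothing, which is the point: "akrasia" alone tells you nothing to fix.
    """
    return {cause: fix for cause, fix in CAUSES.items() if cause in observations}

print(diagnose({"tank empty"}))  # -> {'tank empty': 'add fuel'}
print(diagnose(set()))           # -> {}: the label alone picks out no cause
```

The same move works for akrasia: tabooing the word forces you to supply an observation-level cause before any fix can be chosen.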
I have no confusion about the word ‘akrasia’. I believe you are confused about just how generally it is appropriate to apply your label ‘conflict’ as the cause of ‘won’t-startia’. If you apply it near-tautologically, then it is too fuzzy to make any predictions. If you don’t, then you must acknowledge that there are reasons for ‘won’t-startia’ that are not best described as ‘conflict’. The obvious proof-of-concept example is “take a perfectly RMI (etc) hacked brain and consider what happens when an unfortunate blood clot occurs in the appropriate area of the prefrontal cortex”.
Your ‘conflict’ model becomes stronger and more useful once you can specify the boundaries of where it applies.
If you don’t then you must acknowledge that there are reasons for ‘won’t-startia’ that are not best described as ‘conflict’.
Of course. Note that the original post also addresses reducing the negative effects of conflict. But it probably would’ve been better to state at some point in there, “assuming correct hardware function”.
(On the other hand, I didn’t say that for the same very good reason I discourage people from thinking about agency: in the common case, people use it to excuse their inability to do something, rather than understanding just how high a bar their evidence has to meet to qualify as either a true (and unworkaroundable) hardware problem or a true “agency”.)
There is sometimes a conflict between ‘optimal for people to believe’ and ‘most true’. The downside of presenting a model as a ‘hypothesis’ and ‘near-tautological truth’ without specifying the limits of what it is intended to model is that it validates objections such as the one Orthonormal has made here.
The main concern I have is that this hypothesis is a fuzzy theory of the workings of the mind which appears to be more rigorous than it is: it seems to me that one could explain any experimental outcome by inventing more and more sources of conflict.
Holding back the ‘whole truth’ from people who are actually rational also limits their options. If someone has no room in their ‘map’ to allow for actual hardware considerations then that prevents them from considering any options available to them for improving their hardware. And there are options for improving the function of the brain in some cases.
Holding back the ‘whole truth’ from people who are actually rational also limits their options.
Remember that bit in Hitchhiker’s Guide where they gave the guy too much truth serum, and he started to tell The Whole Truth? That’s what I felt like while trying to write this article, trying to figure out what NOT to include. (For example, I left out how this entire classification scheme is just a routine application of Goldratt’s theory of constraints to troubleshooting any sort of problem, not just akrasia.)
There is a LOT of truth; you have to pick some place to start. And that place depends on your purpose in the telling.
In this case, the purpose of framing this idea as a hypothesis was to provide a stepping stone for people to grok something important that’s independent of the hypothesis itself.
Specifically: that akrasia is not a thing, and that this lack-of-thingness has various real effects and consequences. The hypothesis itself is a throwaway: you could replace it with a variety of similar hypotheses, and the effect would still be the same in practical terms.
(In retrospect, it might have been better to call this a “thought experiment” than a hypothesis.)
Anyway, I had a few very narrow purposes for this post, and they would not have been served by adding too much information—the post is a bit long for LW as it is. Everything is a tradeoff.
And there are options for improving the function of the brain in some cases.
Yep, just like I listed in the very first category of methods: hygienic/systemic methods like meditation, exercise, etc. If your brain function is truly the constraint, then that’s the thing to fix.
(If I’d wanted to make a larger point about ToC—and I do in the long run, just not in this post—then I’d have explained that the categories I chose to group methods into are based on possible failure nodes in a causal chain… not unlike block-diagramming a car and classifying car-won’t-startia into problems of ignition, compression, air/fuel mix, etc. These groupings are only partially dependent upon a notion of “conflict”. Anyway, that’s why there’s a mention of causal chains in the article’s epilogue.)
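The causal-chain framing can be sketched as a minimal constraint-finder (the stage names and `find_constraint` helper are invented for illustration, not part of the post): walk the chain in order and report the first node that fails, since nothing downstream matters until that node works.

```python
# Hypothetical causal chain for starting a car, ordered upstream to downstream.
CAUSAL_CHAIN = ["battery", "ignition", "compression", "air/fuel mix"]

def find_constraint(working):
    """Return the first failing stage -- the constraint -- or None if all work.

    In theory-of-constraints terms, only the first broken node is worth
    fixing now; the later stages can't function until it does.
    """
    for stage in CAUSAL_CHAIN:
        if stage not in working:
            return stage
    return None  # every node works: no constraint in this chain

print(find_constraint({"battery", "ignition"}))  # -> compression
```

Grouping akrasia methods by which node they repair, rather than by the word “conflict”, is the analogous move.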
P.P.S. Of course, this is basically the point of Practical Advice Backed By Deep Theories, come to think of it.