Evidence is demanded for a communication to actually change the preferred decision.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
If a guy on the forum says, to my disbelief, that giving up the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion.
It’s funny that you mention this, because I actually know of an author who says something just similar enough to that idea that you could plausibly mistake what she says as meaning you should eat the cookies.
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I bring this up because, if you heard this theory and then misinterpreted it as meaning you should eat the cookies, it would be quite logical for you to be skeptical, since it doesn’t match your experience.
However, if you simply observed your past experience of overeating and found a correlation between times when you ate cookies and a pending separation from food (e.g. when you’re about to go into a long meeting), I would be very disappointed in your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting.
And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you’ve definitely misunderstood something I’ve said.
(Btw, in case anyone asks, the author is Dr. Martha Beck and the book I’m referring to above is called The Four-Day Win.)
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I strongly suspect that this trick wouldn’t work on me—the problem is that I’ve taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, that I was just supposed to keep available and feel the availability of, but not eat, would not feel available. If it was truly genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn’t convince myself it was available.
What I may try is telling myself a true statement when I’m tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help—if the basic underlying theory of eating to avoid famine is correct. Parts of the Seth Roberts paradigm suggest that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day.
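To make the taxi-driver comparison concrete, here’s a toy sketch in Python. The fare rates, the daily target, and the 75/25 split are all invented numbers, just enough to show why quitting at a fixed daily minimum is the worst allocation of a given number of hours:

```python
# Toy comparison of taxi-driver strategies (all numbers are made up for illustration).
# The point: for the same total hours, earnings are higher if you work MORE when the
# hourly rate is high, not less -- the daily-income-target heuristic gets this backwards.

rates = {"rainy": 40.0, "dry": 20.0}   # hypothetical dollars per hour on each kind of day
daily_target = 200.0                   # the income-target driver quits at this amount

# Strategy 1: quit each day once the target is reached.
target_hours = {day: daily_target / rate for day, rate in rates.items()}
target_total_hours = sum(target_hours.values())
target_earnings = daily_target * len(rates)

# Strategy 2: same total hours, allocated evenly across both days.
even_hours = target_total_hours / len(rates)
even_earnings = sum(even_hours * rate for rate in rates.values())

# Strategy 3: same total hours, weighted toward the high-rate (rainy) day.
weighted = {"rainy": target_total_hours * 0.75, "dry": target_total_hours * 0.25}
weighted_earnings = sum(weighted[day] * rates[day] for day in rates)

print(f"income target : {target_total_hours:.1f} h -> ${target_earnings:.0f}")
print(f"even split    : {target_total_hours:.1f} h -> ${even_earnings:.0f}")
print(f"work busy days: {target_total_hours:.1f} h -> ${weighted_earnings:.0f}")
```

With those numbers the income-target driver makes $400 for 15 hours of work, an even split makes $450, and leaning into the busy days makes $525, all for the same total hours.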
Another thought is that it may be bad for your diet to ever allow yourself to be in food competition with someone else—that is, to have two people, at least one of whom is trying to diet, eating from the same bag of snacks, where the bag is not immediately refilled once it’s consumed.
’Tis a pity that such theories will never be tested unless the diet-book industry and its victims/prey/readers become something other than what they are now; even if I were to post saying this trick worked, it would only be one more anecdote among millions on the Internet.
That might help—if the basic underlying theory of eating to avoid famine is correct.
IIRC, she only advocated this theory for people who were bingeing in response to anticipated hunger, not as a general theory of weight loss. It’s only a tiny part of the book as a whole, which also discusses other emotional drivers for eating. Part of her process includes making a log of what you eat and at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having… along with a reason why the relevant thought might not be true.
I haven’t tried it myself—I actually didn’t buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn’t represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools.
That is, it’s only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days… but just sitting in the parking lot and not actually going in… then going in and sitting on a bike but not exercising… etc. At each stage, four days of it is supposed to be enough to make what you’ve already been doing a non-threatening part of your routine.
I’ve used the approach to implement some small habits, but nothing major as yet. Seems promising so far.
It seems that keeping cookies constantly available, so that one never feels they will be unavailable, does not involve any sort of self-deception. One can honestly tell oneself that one doesn’t have to eat the cookie now; it will still be there later.
But still, this trick might be harmful to some people. If someone will instead just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort it harms, or better yet, identify some observable characteristic that predicts how a person would be affected. With this information, people can figure out if it makes sense for them to risk making their problem worse for a while, by having more cookies available, in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should first try some other trick they heard about.
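To make that concrete, here’s the kind of back-of-the-envelope calculation such studies would enable, sketched in Python. Every probability and cost below is hypothetical; they stand in for exactly the numbers a study, or an observable predictor, would have to supply:

```python
# Back-of-the-envelope decision sketch (all probabilities and costs are hypothetical --
# the imagined studies would have to supply real ones).
# Question: is it worth risking a few weeks of extra cookie-eating to maybe fix the problem?

p_helped = 0.4          # chance you're the kind of person the trick helps
p_harmed = 0.3          # chance freely available cookies just get eaten
p_no_effect = 1 - p_helped - p_harmed

benefit_if_helped = 100.0   # long-run value of solving the overeating problem (arbitrary units)
cost_if_harmed = 20.0       # cost of a few weeks of making it worse before you notice and stop

expected_value = (p_helped * benefit_if_helped
                  - p_harmed * cost_if_harmed
                  + p_no_effect * 0.0)

print(f"Expected value of trying the trick: {expected_value:+.1f}")
# With these numbers it comes out positive (+34.0), but a different prior about which
# group you fall into -- ideally informed by an observable predictor -- can flip the answer.
```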
But still, this trick might be harmful to some people. If someone will instead just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
Sure, that doesn’t mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don’t take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.
Even if you don’t know what the hell you’re doing and try things randomly, you’ll improve as long as there’s some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better… but ONLY by doing something besides thinking.
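For what it’s worth, here’s a minimal sketch of that “try things and measure” loop in Python. The objective function is a made-up stand-in for whatever you can actually observe, such as the number on the scale or the time on a stopwatch:

```python
import random

# Minimal sketch of "try things, keep what measurably works" -- the evolution-style loop
# described above. The objective is a stand-in for any empirical measurement you can make.

def measurable_outcome(x):
    # Hypothetical: smaller is better, best result at x = 3.
    return (x - 3.0) ** 2

current = 10.0
best = measurable_outcome(current)

for _ in range(200):
    candidate = current + random.uniform(-1.0, 1.0)   # try something more or less at random
    result = measurable_outcome(candidate)
    if result < best:                                 # keep it only if the measurement improved
        current, best = candidate, result

print(f"ended up near x = {current:.2f} with outcome {best:.4f}")
# No cleverness at all -- but with a real measurement and cheap mistakes, it still converges.
```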
After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!
I learned the hard way that my brain’s confabulation—“reasoning”—is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it’s lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.
One of my pet sayings is that “amateurs guess, professionals test”. But “test” in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it’s 27% likely that the problem with my car is in the spark plugs, but didn’t actually test them, I’d best get another mechanic!
The best that statistics can do for the mechanic is to mildly optimize what tests should be done first… but you could get almost as much optimization by testing in easiest-first order.
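Here’s a toy illustration of that last point, with invented test costs and fault probabilities (only the 27% for the spark plugs is carried over from the example above). With these numbers, cheapest-first ordering comes out only about a minute worse, in expectation, than the statistically optimal ordering:

```python
from itertools import permutations

# Toy fault-diagnosis sketch (component names, costs, and probabilities are invented).
# Each test definitively rules its component in or out; we test until the fault is found.
# Expected cost of an ordering = each test's cost times the chance we still haven't
# found the fault by the time we get to it.

tests = {                      # name: (cost in minutes, probability it's the culprit)
    "spark plugs": (5, 0.27),
    "battery":     (2, 0.20),
    "fuel pump":   (30, 0.38),
    "alternator":  (15, 0.15),
}

def expected_cost(order):
    total, p_unfound = 0.0, 1.0
    for name in order:
        cost, prob = tests[name]
        total += cost * p_unfound
        p_unfound -= prob
    return total

cheapest_first = sorted(tests, key=lambda n: tests[n][0])
best = min(permutations(tests), key=expected_cost)

print("cheapest-first:", cheapest_first, f"-> {expected_cost(cheapest_first):.1f} min expected")
print("optimal order :", list(best), f"-> {expected_cost(best):.1f} min expected")
```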
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort it harms, or better yet, identify some observable characteristic that predicts how a person would be affected.
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first?
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
Getting volunteer subjects for a study is different from announcing a trick on the internet and expecting people to try it.
Where do you think the data is going to come from if people just try it on their own? How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
I can only assume you’re implying that that’s what I do. But as I’ve already stated, when someone has performed a technique to my satisfaction, and it still doesn’t work, I have them try something else. I don’t just say, “oh well, tough luck, and it’s your fault”.
There are only a few possibilities regarding an explanation of why “different things work for different people”:
1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves,
2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2), and
3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2,...)
On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.)
Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3 that, barring actual brain damage or defect, #1 seems like adding unnecessary entities to one’s model, without any theoretical or empirical justification whatsoever.
More than that, it sounds exactly like attribution error, and an instance of Dweck’s “fixed” mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.
That’s why I consider view #1 to be bad instrumental hygiene as well as not that likely to be true anyway. It’s a horrible negative self-prime to saddle yourself with.
Actually, it might make more sense to try to figure out why it works sometimes and not others, which can even happen in the same person… like, uh, me. If I’m careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy ‘treats’ more often (‘just this once’, repeatedly) when I go out. If I keep goodies at home, I’ll ignore them for a while, but then decide something along the lines of “it’d be a shame to let this go to waste” and eat them anyway.
There are different mental states involved in each of those situations, but I don’t know what triggers the switch from one to another.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
I mean the argument being too weak to change one’s mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn’t flip the switch.