Perhaps there are intuitive notions of “less wrong” that are different from “more right”, but in a technical sense, they seem to be the same:
At this point it may occur to some readers that there’s an obvious way to achieve perfect calibration—just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you’re right half the time. Isn’t that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is discrimination.
Suppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between “Yes” and “No”. Congratulations, you’re perfectly calibrated—answers for which you said “50% probability” were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. Over the ten questions you said “50%” on twenty occasions—once for each of Yes-1, No-1; Yes-2, No-2; …. On ten of those occasions the answer was correct (Yes-1; No-2; No-3; …), and on the other ten it was incorrect (No-1; Yes-2; Yes-3; …).
Now I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I’m more poorly calibrated than you. I said “90%” on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate “90%” into 80%, knowing that when I’m 90% sure I’m right about 80% of the time. But the probability you assigned to the final outcome is 1⁄2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn’t, but my better discrimination between right and wrong answers more than makes up for it. My final score is higher—I assigned a greater joint probability to the final outcome of the entire experiment. If I’d been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 * 0.2^2, 0.006.
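If you want to check the arithmetic, here is a minimal sketch (the joint_probability helper and the Python rendering are mine, not part of the original comment) that reproduces the three joint probabilities above:

```python
import math

def joint_probability(confidences_in_truth):
    """Product of the probabilities the forecaster assigned to the answers that turned out true."""
    return math.prod(confidences_in_truth)

# Ten questions. The coin-flipper assigns 0.5 to the true answer every time.
coin_flipper = [0.5] * 10

# The 90%-confident forecaster assigned 0.9 to the truth on 8 questions
# and only 0.1 to the truth on the 2 questions they got wrong.
overconfident = [0.9] * 8 + [0.1] * 2

# The better-calibrated 80%-confident forecaster with the same hit rate.
calibrated_80 = [0.8] * 8 + [0.2] * 2

print(joint_probability(coin_flipper))   # ~0.000977, i.e. 1/1024
print(joint_probability(overconfident))  # ~0.0043
print(joint_probability(calibrated_80))  # ~0.0067
```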
Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
Perhaps there are intuitive notions of “less wrong” that are different from “more right”, but in a technical sense, they seem to be the same:
True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there’s a difference between affirming fewer false beliefs and more true ones.
The fact that psychology can explain how the phrase “less wrong” can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
That’s what I’ve been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it’s well established that humans can have both utility and disutility, that they’re not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff?
It’s not rational to expect human beings to treat “less wrong” as meaning the same thing (in behavioral terms) as “more right”. Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think “avoiding a predator” versus “hunting for food”.
The idea that we can simultaneously have approach and avoidance behaviors and they’re differently-motivating is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc.
Heck, I’d be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms “less wrong” and “more right”, no matter how “rational” they rate themselves to be.
Perhaps there are intuitive notions of “less wrong” that are different from “more right”
You bet: there’s the one where you can be “less wrong” by never believing anything, because there are more possible false beliefs than true ones. You have now achieved perfect less-wrongness, at the cost of never having any more-rightness.
You missed the point. The intuitive meaning of “less wrong” you describe is a caricature of the ideal of this community.
If by “never believing anything”, you mean “don’t assign any probability to any event”, well then we give a person who does that a score of negative infinity, as wrong as it gets.
If you mean they evenly distribute the probability mass amongst all possibilities, that is what we consider maximum entropy, a standard so low that anything worse might be considered “reversed intelligence”. As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
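As a toy illustration of the scoring rule being appealed to here (the log_score helper is mine, assuming the standard log-probability score): refusing to assign any probability scores as negative infinity, a uniform fifty-fifty split scores at the maximum-entropy level, and genuine discrimination scores better whenever the favored answer is right.

```python
import math

def log_score(prob_assigned_to_truth):
    # Assigning zero probability to what actually happened is "as wrong as it gets".
    if prob_assigned_to_truth == 0:
        return -math.inf
    return math.log(prob_assigned_to_truth)

print(log_score(0.0))  # -inf: "never believing anything"
print(log_score(0.5))  # ~-0.69 per yes/no question: maximum entropy, freely confessed ignorance
print(log_score(0.9))  # ~-0.11: actual discrimination, when the favored answer is right
```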
The intuitive meaning of “less wrong” you describe is a caricature of the ideal of this community.
It’s not a caricature of the actual behavior of many of its members… which notably does not live up to that ideal.
If by “never believing anything”, you mean “don’t assign any probability to any event”, well then we give a person who does that a score of negative infinity, as wrong as it gets.
No, I mean choosing to never consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true… especially with respect to the things we would prefer to believe are true about ourselves and others.
A segment of LW culture applauds the detection and management of superficial biases while being ludicrously blind to the massive bias of the very framework it operates in: the one where truth and reason must prevail at all costs, and where the idea of believing something false—even for a moment, even in a higher cause, is unthinkable.
Is that a caricature of the Bayesian ideal? No kidding. But I’m not the one who’s drawing it.
As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
What I’m specifically referring to here is the brigade whose favorite argument is that something or other isn’t yet proven “true”, and that they should therefore not try it… especially if they spend more time writing about why they shouldn’t try something, than it would take them to try it.
Heck, not just why they shouldn’t try something, but why no one should ever try anything that isn’t proven. Why, thinking a new thought might be dangerous!
And yes, someone actually argued that, in the context of a thread talking about purely-mental experiments that basically amounted to thinking. (Sure, they left themselves weasel room to argue that they weren’t saying thoughts were dangerous, and yet they still used it as a fully general argument, applied to the specific case of experimenting with a thought process.)
What’s that saying about how, if given a choice between changing their mind and trying to prove they don’t need to, most people get busy on the proof?
So, “never believing anything” means having unwavering certainty?
What I’m specifically referring to here is the brigade whose favorite argument is that something or other isn’t yet proven “true”, and that they should therefore not try it… especially if they spend more time writing about why they shouldn’t try something, than it would take them to try it.
Without knowing what “brigade” or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful. Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose. They might even think that, if they knew the reasons that someone was proposing it, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right but that, once fixed, it suggests a similar technique might work. They might not actually have the particular problem the technique is supposed to solve, and are seeking evidence about whether it works for people who do have the problem.
I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.
Good point, but a priori I wouldn’t expect a self-help technique to be harmful in a way that’s either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?
Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose.
Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn’t matter. A greater number of options has been shown to lead to less willingness to choose anything (e.g.); beware. (FWIW, I suspect this has to do with a general heuristic to do the most defensible thing instead of the best thing.)
They might even think that, if they knew the reasons that someone was proposing it, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right but that, once fixed, it suggests a similar technique might work.
Strongly agreed. Generally, though, I agree with pjeby’s conclusion (tentatively, but only because so many others here disagree).
Good point, but a priori I wouldn’t expect a self-help technique to be harmful in a way that’s either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?
So, you want an example of a technique that I can argue is harmful, but where it is difficult to predict that harm? You want a known unknown unknown? I don’t think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate whether the risk is worth the potential benefit.
Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn’t matter.
Suppose you have 10 tricks that you might try to solve a particular problem, and that it might take a day to try one trick and evaluate whether it worked for you. Would it be a good idea to spend some time figuring out whether one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or, in discussing one aspect of a trick that you think would never work, you might find out you were right that what you understood would not work, and that the actual trick is something different. Then you have not just saved a lot of time; you have kept yourself from wasting the opportunity to try the real trick.
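As a toy model of this trade-off (the numbers and the expected_days helper are hypothetical, not from the comment), compare the expected number of days spent when the ten tricks are tried in an arbitrary order versus most-promising-first:

```python
def expected_days(success_probs):
    """Expected number of tricks attempted (one day each) before one works or the list runs out."""
    expected, prob_none_worked_yet = 0.0, 1.0
    for p in success_probs:
        expected += prob_none_worked_yet   # a day is spent on this trick only if none before it worked
        prob_none_worked_yet *= (1 - p)    # continue to the next trick only if this one fails
    return expected

# Hypothetical per-trick chances of working, assumed independent.
probs = [0.05, 0.10, 0.10, 0.15, 0.20, 0.20, 0.30, 0.40, 0.50, 0.60]

print(expected_days(probs))                        # arbitrary order: ~5.7 days
print(expected_days(sorted(probs, reverse=True)))  # most promising first: ~2.0 days
```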
So, you want an example of a technique that I can argue is harmful, but where it is difficult to predict that harm? You want a known unknown unknown? I don’t think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate whether the risk is worth the potential benefit.
No, an example of a technique that is harmful, but whose harm would have been difficult for a reasonable person to predict in advance. The potential downside of the cookie trick is easy to notice and easy to reverse (well, I guess you can’t easily reverse gaining epsilon weight, but you can limit it to epsilon), so as a reason not to try it, it’s very weak.
Would it be a good idea to spend some time figuring out whether one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or, in discussing one aspect of a trick that you think would never work, you might find out you were right that what you understood would not work, and that the actual trick is something different. Then you have not just saved a lot of time; you have kept yourself from wasting the opportunity to try the real trick.
I take my point back. If you can only try one thing, it makes sense to just act if there is only one option, but to demand a good reason before wasting your chance if there are multiple options. (Formally, this is because the opportunity cost of failure is greater in the latter case.) Realistically, “willpower to engage in psychological modification” seems like it would often be a limiting factor producing this effect; still, I would expect irrational choice avoidance to be a factor in many cases of people demanding a reason to favor one option.
Without knowing what “brigade” or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where the evidence presented most often is that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole “rationality” business, to make sure it’s more helpful than harmful?
It’s really ironic that optimism is as much a mind-killer here as politics and religion. Hell, the fact that religion can be shown to have empirically positive effects on people’s lives is often viewed here as a depressing problem, rather than an opportunity to learn something about how brains work. The problem of understanding the god-shaped hole is something people talk a lot about, but very few people are actually doing anything about it.
What is the evidence that empirical rationality is more likely to be helpful than harmful?
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.
Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don’t.
And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right. You might discover the greatest anti-akrasia trick ever, but if you can’t explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping. Of course, you could take the opportunity to figure out how to explain it better, though it would require you to “consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true”.
And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right.
Two things I forgot in my other reply: first, testing on yourself is a higher standard than peer review, if your purpose is to find something that works for you.
Second, if this actually were about “my” ideas (and it isn’t), I’ve certainly effectively communicated many of them to the extent of verifiability, since many people have reported here and elsewhere about their experiments with them.
But very few of “my” ideas are new in any event—I have a few new approaches to presentation or learning, sure, maybe some new connections between fields (ev psych + priming + somatic markers + memory-prediction framework + memory reconsolidation, etc.), and a relatively-new emphasis on real-time, personal empirical testing. (I say relatively new because Bandler was advocating extreme testing of this sort 20+ years ago, but for some reason it never caught on in the field at large.)
And I’m not aware that any of these ideas is particularly controversial in the scientific community. Nobody’s pushing for more individual empirical testing per se, but the “brief therapy” movement that resulted in things like CBT is certainly more focused in that direction than before.
(The reason I stopped even bothering to write about any of that, though, is simply that I ended up in some sort of weird loop where people insist on references, and then ignore the ones I supply, even when they’re online papers or Wikipedia. Is it any wonder that I would then conclude they didn’t really want the references?)
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.
Those are the products of rationalism. I’m asking about evidence that the practice of (extreme) rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.
Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don’t.
It is if you also apply the status quo bias to choose which evidence to count.
You might discover the greatest anti-akrasia trick ever, but if you can’t explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping
I really wish people wouldn’t conflate the discussion of learning and attitude in general with the issue of specific techniques. There is plenty of evidence for how attitudes (of both student and teacher) affect learning, yet somehow the subject remains quite controversial here.
(Edited to say “extreme rationalism”, as suggested by Nick Tarleton.)
I’m asking about evidence that the practice of rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.
You should probably be asking about extreme rationality.
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where the evidence presented most often is that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole “rationality” business, to make sure it’s more helpful than harmful?
Evidence is demanded for communicating the change in preferred decision.
If I like eating cookies, and so choose to eat cookies, it takes at least a deliberative thought to change my mind. I may have all the data, but changing a decision requires considering it. I may realize that I’m getting overweight, and that most of my calories come from cookies, so I change my mind and start preferring the decision of not eating cookies.
If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion. I consider what it’d take to change my mind, and present him with a constructive request: find a few good studies supporting your claims, and show them to me. That’s what it takes to change my mind, and I can think of no other obvious way for him to convince me to change this decision.
Evidence is demanded for communicating the change in preferred decision.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion.
It’s funny that you mention this, because I actually know of an author who says something just similar enough to that idea that you could mistake what she says as meaning you should eat the cookies.
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I bring this up because, if you heard this theory, and then misinterpreted it as meaning you should eat the cookies, then it would be quite logical for you to be skeptical, since it doesn’t match your experience.
However, if you simply observed your past experience of overeating and found a correlation between times when you ate cookies and a pending separation from food (e.g., when about to go into a long meeting), I would be very disappointed in your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting.
And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you’ve definitely misunderstood something I’ve said.
(Btw, in case anyone asks, the author is Dr. Martha Beck and the book I’m referring to above is called The Four-Day Win.)
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I strongly suspect that this trick wouldn’t work on me—the problem is that I’ve taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, that I was just supposed to keep available and feel the availability of, but not eat, would not feel available. If it was truly genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn’t convince myself it was available.
What I may try is telling myself a true statement when I’m tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help—if the basic underlying theory of eating to avoid famine is correct. Some of the Seth Roberts paradigm suggests that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day.
Another thought is that it may be a bad situation for your diet to ever allow yourself to be in food competition with someone else—to ever have two people, at least one of whom is trying to diet, eating from the same bag of snacks in a case where the bag is not immediately refilled on being consumed.
’Tis a pity that such theories will never be tested unless the diet-book industry, and its victims/prey/readers, become something other than what they are now; even if I were to post saying this trick worked, it would only be one more anecdote among millions on the Internet.
That might help—if the basic underlying theory of eating to avoid famine is correct.
IIRC, she only advocated this theory for people who were binging in response to anticipated hunger, and not as a general theory of weight loss. It’s only a tiny part of the book as a whole, which also discussed other emotional drivers for eating. Part of her process includes making a log of what you eat, at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having… along with a reason why the relevant thought might not be true.
I haven’t tried it myself—I actually didn’t buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn’t represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools.
That is, it’s only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days… but just sitting in the parking lot and not actually going in… then going in and sitting on a bike but not exercising… etc. At each stage, four days of it is supposed to be enough to make what you’ve already been doing a non-threatening part of your routine.
I’ve used the approach to implement some small habits, but nothing major as yet. Seems promising so far.
It seems that keeping cookies constantly available so that one never feels they will be unavailable does not involve any sort of self-deception. One can honestly tell oneself that they don’t have to eat the cookie now; it will still be there later.
But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predict how a person would be affected. With this information, people can figure out if it makes sense for them to risk some time making their problem worse by having more cookies available for a time in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should try some other trick they heard about first.
But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
Sure, that doesn’t mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don’t take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.
Even if you don’t know what the hell you’re doing and try things randomly, you’ll improve as long as there’s some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better… but ONLY by doing something besides thinking.
After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!
I learned the hard way that my brain’s confabulation—“reasoning”—is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it’s lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.
One of my pet sayings is that “amateurs guess, professionals test”. But “test” in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it’s 27% likely that the problem with my car is in the spark plugs, but didn’t actually test them, I’d best get another mechanic!
The best that statistics can do for the mechanic is to mildly optimize what tests should be done first… but you could get almost as much optimization by testing in easiest-first order.
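A hypothetical sketch of that last point (the fault list, probabilities, and test times are made up for illustration): expected time to find the fault when the tests are run in probability order, easiest-first order, or the classic probability-per-minute order.

```python
def expected_minutes(tests):
    """tests: list of (probability the fault is here, minutes to run the test); probabilities sum to 1."""
    spent, expected = 0.0, 0.0
    for prob, minutes in tests:
        spent += minutes
        expected += prob * spent   # if the fault is here, diagnosis stops after `spent` minutes
    return expected

# (probability, minutes) for each candidate fault, e.g. spark plugs, wiring harness, fuel pump, sensor.
faults = [(0.27, 5), (0.40, 60), (0.20, 15), (0.13, 30)]

by_probability = sorted(faults, key=lambda t: -t[0])
easiest_first  = sorted(faults, key=lambda t: t[1])
by_ratio       = sorted(faults, key=lambda t: -t[0] / t[1])  # probability per minute

print(expected_minutes(by_probability))  # ~72 minutes
print(expected_minutes(easiest_first))   # ~56 minutes
print(expected_minutes(by_ratio))        # ~52 minutes
```

Under these made-up numbers, easiest-first already captures most of the benefit of the ratio-optimal ordering, which is the point being made above.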
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predict how a person would be affected.
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first?
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
Getting volunteer subjects for a study is different than announcing a trick on the internet and expecting people to try it.
Where do you think the data is going to come from if people just try it on their own? How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
I can only assume you’re implying that that’s what I do. But as I’ve already stated, when someone has performed a technique to my satisfaction, and it still doesn’t work, I have them try something else. I don’t just say, “oh well, tough luck, and it’s your fault”.
There are only a few possibilities regarding an explanation of why “different things work for different people”:
1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves;
2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2); and
3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2, ...)
On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.)
Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3, that barring actual brain damage or defect, #1 seems like adding unnecessary entities to one’s model, without any theoretical or empirical justification whatsoever.
More than that, it sounds exactly like attribution error, and an instance of Dweck’s “fixed” mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.
That’s why I consider view #1 to be bad instrumental hygiene as well as not that likely to be true anyway. It’s a horrible negative self-prime to saddle yourself with.
Actually, it might make more sense to try to figure out why it works some times and not others, which can even happen in the same person… like, uh, me. If I’m careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy ‘treats’ more often (‘just this once’, repeatedly) when I go out. If I keep goodies at home, I’ll ignore them for a while, but then decide something along the lines of “it’d be a shame to let this go to waste” and eat them anyway.
There are different mental states involved in each of those situations, but I don’t know what triggers the switch from one to another.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
I mean the argument being too weak to change one’s mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn’t flip the switch.