I don’t feel that even superficially projecting the desirability of noticing a flaw onto the desirability of that flaw is warranted or useful. It’s not OK to be irrational, but it’s a fact of the human condition that we are all rather irrational, and many choices and beliefs have a nontrivial chance of being seriously wrong.
Striving to be rational by closing your eyes to your own mistakes is a fallacy of the bottom line; when applied to action, it becomes a confusion of rationalization with planning. This is a pervasive flaw, one to be wary of in all situations. If you are emotionally attached to a wrongly flattering self-image, it is a problem with that image, one to be systematically corrected by expecting more mistakes from yourself, but not at all because the mistakes are acceptable or desirable.
What does that even mean? Reality doesn’t contain any little xml tags on concrete objects, let alone ill-defined abstractions like “irrational”.
Asserting that anything is “OK” or “Not OK” properly belongs to the dark arts of persuasion and motivation, not to the realm of objective reality.
If you are emotionally attached to a wrongly flattering self-image, it is a problem with that image, one to be systematically corrected by expecting more mistakes from yourself, but not at all because the mistakes are acceptable or desirable.
This is an extraordinary claim; the scientific evidence weighs overwhelmingly against you, in that it is clearly more useful to be drawn to live up to an incorrect, flattering future self-image, than to focus on an image of yourself that is currently correct, but unflattering.
What does that even mean? Reality doesn’t contain any little xml tags on concrete objects, let alone ill-defined abstractions like “irrational”.
This looks like a fully general argument against all reason, against characterizing anything with any property, applied to the purpose of attacking (a connotation of?) my judgment.
This is an extraordinary claim; the scientific evidence weighs overwhelmingly against you, in that it is clearly more useful to be drawn to live up to an incorrect, flattering future self-image, than to focus on an image of yourself that is currently correct, but unflattering.
What do you mean by “currently correct”? Correctness of a statement doesn’t change over time.
I refuse to cherish a flattering self-image that is known to be incorrect. How would it be useful for me to start believing a lie? I’m quite motivated and happy as I am, thank you very much.
This looks like a fully general argument against all reason, against characterizing anything with any property,
...one that can be trivially remedied by rephrasing your original statement in E-Prime.
What do you mean by “currently correct”? Correctness of a statement doesn’t change over time.
I mean that one can be aware that one is currently a sinner, while nonetheless aspiring to be a saint. The people who are most successful in their fields continuously aspire to be better than anyone has ever been before… which is utterly unrealistic, until they actually achieve it. Such falsehoods are more useful to focus on than the truth about one’s past.
The people who are most successful in their fields continuously aspire to be better than anyone has ever been before… which is utterly unrealistic, until they actually achieve it. Such falsehoods are more useful to focus on than the truth about one’s past.
If you win, but you were sure you’d lose, then you were no less wrong than if you had believed you could succeed where success was impossible. People are uncertain about their future, and about their ability, but this uncertainty, this limited knowledge, is about what can actually happen. If you really can succeed in achieving what was never seen before, your aspirations are genuine. What you know about your past is about your past, and what you make of your future is a different story entirely.
If you win, but you were sure you’d lose, then you were no less wrong than if you had believed you could succeed where success was impossible.
Have you ever heard the saying, “If you shoot for the moon and miss… you are still among the stars?” It’s more useful to aim high and fail, than to aim low by being realistic.
I hate to harp on about (time and) relative distances in space but if you shoot for the Moon and miss, you are barely any closer to the stars than you were when you started.
More seriously, you don’t seem to be answering Vladimir_Nesov’s point at all, which is that if you think that such optimism can result in winning, then the optimism isn’t irrational in the first place, and it was the initial belief of impossibility that was mistaken.
you don’t seem to be answering Vladimir_Nesov’s point at all, which is that if you think that such optimism can result in winning, then the optimism isn’t irrational in the first place, and it was the initial belief of impossibility that was mistaken.
Was that really his point? If so, I missed it completely; probably because that position appears to directly contradict what he said in his previous comment.
More precisely, he appeared to be arguing that making wrong predictions (in the sense of assigning incorrect probabilities) is “not OK”.
However, in order to get the benefit of “shooting for the moon”, you have to actually be unrealistic, at the level of your brain’s action planning system, even if intellectually you assign a different set of probabilities. (Which may be why top performers are often paradoxically humble at the same time as they act as if they can achieve the impossible.)
you don’t seem to be answering Vladimir_Nesov’s point at all, which is that if you think that such optimism can result in winning, then the optimism isn’t irrational in the first place, and it was the initial belief of impossibility that was mistaken.
Was that really his point?
Yes, that really was one of the things I argued in my recent comments.
However, in order to get the benefit of “shooting for the moon”, you have to actually be unrealistic, at the level of your brain’s action planning system, even if intellectually you assign a different set of probabilities.
Are you arguing for the absolute necessity of doublethink? Is it now impossible to reach high levels of achievement without doublethink?
Yes, that really was one of the things I argued in my recent comments.
Well, it seems a little tautological to me: only in hindsight can you be sure that your optimism was rational. At the time of your initial optimism, it may be “irrational” from a strictly mathematical perspective, even after taking into account the positive effects of optimism. Note, for example, the high rate of startup failure; if anybody really believed the odds applied to them, nobody would ever start one.
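As a rough illustration of that “strictly mathematical perspective”, here is a minimal expected-value sketch; the base rate and payoff figures below are made-up assumptions, not real startup statistics.

```python
# Hypothetical numbers, purely to illustrate the "strictly mathematical
# perspective" above; real base rates and payoffs vary widely.
p_success = 0.10            # assumed base rate of startup success
payoff_success = 2_000_000  # assumed net gain if it works out
payoff_failure = -100_000   # assumed net cost (time, savings) if it fails

expected_value = p_success * payoff_success + (1 - p_success) * payoff_failure
print(f"Expected value of starting: {expected_value:,.0f}")  # 110,000

# With these made-up numbers the expected value is positive even though
# failure is the most likely single outcome, so taking the base rate at
# face value need not forbid starting; whether founders can *feel* that
# 90% failure figure is a separate, psychological question.
```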
Are you arguing for the absolute necessity of doublethink?
I am not claiming that success requires “doublethink”, in the sense of believing contradictory things. I’m only saying that an emotional belief in success is relevant to your success. What you think of the matter intellectually is of relatively little account, just as your intellectual disbelief in ghosts has relatively little to do with whether you’ll be able to sleep soundly in a “haunted” house.
The main drivers of our actions are found in the “near” system’s sensory models, not the “far” system’s abstract models. However, if the “near” system is modelling failure, it is difficult for the “far” system to believe in success… which leads to people having trouble “believing” in success, because they’re trying to convince the far mind instead of the near one. Or, they succeed in wrapping the far system in double-think, while ignoring the “triple think” of the near system still predicting failure.
In short, the far system and your intellectual thoughts don’t matter very much. Action is not abstraction.
See also: Striving to Accept.
If you have to strive to believe something—with either the near OR far system—you’re doing it wrong. The near system in particular is ridiculously easy to change beliefs in; all you have to do is surface all of the relevant existing beliefs first.
Striving, on the other hand, is an indication that you have conflicting beliefs in play, and need to remove one or more existing ones before trying to install a new one.
(Note: I’m not an epistemic rationalist, I’m an instrumental one. Indeed, I don’t believe that any non-trivial absolute truths are any more knowable than Gödel and Heisenberg have shown us they are in other sorts of systems. I therefore don’t care which models or beliefs are true, only which ones are useful. To the extent that you care about the “truth” of a model, you will find conversing with me frustrating, or at least uninformative.)
Well, it seems a little tautological to me: only in hindsight can you be sure that your optimism was rational.
Wrong. When you act under uncertainty, the outcome is not the judge of the propriety of your reasoning, although it may point out a probable problem.
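One way to see both halves of this claim, that a bad outcome neither convicts nor acquits the reasoning behind a decision yet still counts as evidence of “a probable problem”, is a small Bayesian sketch; every probability below is an assumption chosen only for illustration.

```python
# Minimal Bayes-rule sketch: how much should one bad outcome shift your
# belief that the underlying reasoning was sound? All numbers are
# hypothetical.
prior_sound = 0.8        # assumed prior that the reasoning was sound
p_loss_if_sound = 0.3    # assumed chance of losing despite sound reasoning
p_loss_if_unsound = 0.7  # assumed chance of losing given sloppy reasoning

p_loss = prior_sound * p_loss_if_sound + (1 - prior_sound) * p_loss_if_unsound
posterior_sound = prior_sound * p_loss_if_sound / p_loss
print(f"P(sound reasoning | loss) = {posterior_sound:.2f}")  # about 0.63

# The posterior drops from 0.80 to roughly 0.63: the loss is evidence of
# a probable problem, but it does not by itself prove the decision was
# improper.
```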
What you think of the matter intellectually is of relatively little account, just as your intellectual disbelief in ghosts has relatively little to do with whether you’ll be able to sleep soundly in a “haunted” house.
I understand that the connection isn’t direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.
I therefore don’t care which models or beliefs are true, only which ones are useful. To the extent that you care about the “truth” of a model, you will find conversing with me frustrating, or at least uninformative.
Yet you can’t help but care which claims about models being useful are true.
I understand that the connection isn’t direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.
Perhaps. My point was that your intellectual conclusion doesn’t have much direct impact on your behavior, so the emotional belief has more practical relevance.
Yet you can’t help but care which claims about models being useful are true.
No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.
Yet you can’t help but care which claims about models being useful are true.
No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.
You misunderstood Vladimir Nesov. His point was that “which model is (really, truly) useful” is itself a truth claim. You care which models are in fact useful to you—and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won’t be useful to you).
It’s an awkward claim to word; I’m not sure if my rephrases helped.
His point was that “which model is (really, truly) useful” is itself a truth claim. You care which models are in fact useful to you—and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won’t be useful to you).
That may be true, but I don’t see how it’s useful. ;-)
Actually, I don’t even see that it’s always true. I only need accurate predictions of which models will be useful when the cost of testing them is high compared to their expected utility. If the cost of testing is low, I’m better off testing them myself than worrying about whether they’re in fact going to be useful.
In fact, excessive pre-prediction of what models are likely to be useful is probably a bad idea; I could’ve made more progress in improving myself, a lot sooner, if I hadn’t been so quick to assume that I could predict the usefulness of a method without having first experienced it.
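A minimal sketch of the test-it-versus-predict-it trade-off described above, with made-up costs and benefits in arbitrary units; the function and all of its inputs are hypothetical, not a method anyone in the thread proposed.

```python
# Rough decision rule: when is it worth simply trying a method rather
# than first demanding an accurate prediction of its usefulness?
def worth_trying_directly(test_cost, p_useful_guess, benefit_if_useful):
    """True if just trying the method has positive expected value even
    under a rough, pessimistic guess at its odds of being useful."""
    return p_useful_guess * benefit_if_useful - test_cost > 0

# Cheap to try, large potential benefit: just test it yourself.
print(worth_trying_directly(test_cost=1, p_useful_guess=0.2, benefit_if_useful=50))   # True
# Expensive to try relative to the likely benefit: prediction accuracy matters.
print(worth_trying_directly(test_cost=30, p_useful_guess=0.2, benefit_if_useful=50))  # False
```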
By way of historical example, Benjamin Franklin concluded that hypnosis was nonsense because Mesmer’s (incorrect) model of how it worked was nonsense… and so he passed up the opportunity to learn something useful.
More recently, I’ve tried to learn from his example by ignoring the often-nonsensical models that people put forth for their methods, focusing instead on whether the method itself produces the claimed results, when approached with an open mind.
Then, if possible, I try to construct a simpler, saner, more rigorous model for the method—though still without any claim of absolute truth.
Less-wrongness is often useful; rejecting apparent wrongness, much less so.
Note, for example, the high rate of startup failure; if anybody really believed the odds applied to them, nobody would ever start one.
AFAIK, “really believe” is used to mean both “emotionally accept” and “have as a deliberative anticipation-controller”. I take it you mean the first, but given the ambiguity, we should probably not use the term. Just a suggestion.
AFAIK, “really believe” is used to mean both “emotionally accept” and “have as a deliberative anticipation-controller”. I take it you mean the first, but given the ambiguity, we should probably not use the term. Just a suggestion.
Here’s the thing: intellectual beliefs aren’t always anticipation controllers. Sometimes, they’re just abstract information marked as “correct”—applause lights or teacher’s passwords.
So, by “really believe”, I mean what your automatic machinery will use to make the predictions that will actually drive your actions.
This also connects with “emotionally accept”—but isn’t precisely the same thing. You can emotionally accept something without actually expecting it to happen… and it’s this autonomous “expectation” machinery that I’m referring to: the same sort of thing that makes your brain “expect” that running away from the haunted house is a good idea.
These sorts of expectations and predictions are always running, driving your current behavior. However, conscious anticipation (by definition) is something you have to do on purpose, and therefore has negligible impact on your real-time behaviors.
Not sure I get the distinction you’re drawing. Supposing you say you know you won’t win, but then you buy a lottery ticket anyway. Is that a failure of emotional acceptance of the number representing your odds, or a failure of anticipation control?
If you were akratically compelled to buy the ticket, failure of emotional acceptance. Failure of anticipation control at a deliberative level is the kind of thing that produces statements about invisible dragons. It’s hard to think of a plausible way that could happen in this situation – maybe Escher-brained statements like “it won’t win, but it still might”?