That’s a terrible quote. Being wrong is the best possible outcome of an argument, as it’s the one with the highest expected knowledge gain (unless you’re a hardcore altruist who doesn’t value their own knowledge differently from anyone else’s).
I agree with you, but I don’t think that makes it a terrible quote. I personally don’t seem to be psychologically able to avoid that awful sinking feeling when I realise I’m wrong, and it does suck. But recognising that it sucks is an important part of letting the sinking feeling wash over you, not being personally offended by it, and realising that if you update on this piece of wrongness, you’re slightly less likely to be wrong again next time. For me at least, if I just try to pretend the sinking feeling isn’t happening, because “rationally” it shouldn’t, it just means I’m pretending the wrongness itself isn’t happening. And that’s a bad idea.
Emotions help anchor new knowledge. That horrible sinking feeling helps you remember your screw-up, so you don’t do it again. I suspect the people who keep making the same mistakes are those who try to hide their mistakes from themselves and avoid that feeling.
OTOH, you could just compare the feeling of being proven wrong to the feeling of being constipated with hemorrhoids and not being able to urinate, and reckon that you’re getting off pretty easy.
Being wrong is the best possible outcome of an argument, as it’s the one with the highest expected knowledge gain
No, that’s backwards. Learning that you are wrong is good if and only if you are wrong. But it’s only good because you were already wrong, which was bad—you were making bad decisions before. It’s like saying that it is better to win the lottery than to be born rich. Roughly speaking, it doesn’t matter when or where the money or knowledge comes from, only that you can use it.
So if you are surprised to find a $20 bill in your couch, your disappointment at having lost $20 some time in the past is equal to your pleasure at now having $20 more than you did a moment ago?
My current level of ignorance is a fact of life; I already know that there must be things that I’m wrong about. How is finding out something in particular that I am wrong about anything but a positive outcome?
So if you are surprised to find a $20 bill in your couch, your disappointment at having lost $20 some time in the past is equal to your pleasure at now having $20 more than you did a moment ago?
That depends rather a lot on my dopamine levels and thought patterns. I gain much more pleasure from finding cash than I am disappointed at losing it. Hang on… Excuse me. Going for a walk around my house with my wallet open.
Careful… diminishing returns still apply ;)
So if you are surprised to find a $20 bill in your couch, your disappointment at having lost $20 some time in the past is equal to your pleasure at now having $20 more than you did a moment ago?
Roughly, yes.
My current level of ignorance is a fact of life; I already know that there must be things that I’m wrong about. How is finding out something in particular that I am wrong about anything but a positive outcome?
If your mistakes are independent, then correcting one of them doesn’t (much) correct your estimate of how many more mistakes you have to correct. Say you have 21 beliefs, each held with 95% confidence, so you expect about one of them (21 × 0.05 = 1.05) to be wrong. An argument clarifies a random one of them; the other 20 are untouched, so you still have 1 expected wrong belief (20 × 0.05). By independence, we might as well say it’s belief #1 that gets clarified. People who were wrong about it end up the same as people who were right about it. Yes, they gained more information, but they were really just unlucky to start with less information. This is exactly the lottery/inheritance model.
Yes, your ignorance is a fact, but it’s not a fact accessible to you. The argument decreases your estimate of your ignorance by the same amount, regardless of whether you win or lose. If you happen to know how ignorant you are, how many items you’re wrong about, then the situation is different, but that’s a lot less realistic than independence.
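A minimal sketch of that model in code (independence assumed; the numbers are just the ones from the example above):

```python
# Independent-beliefs model from the comment above (illustrative numbers only).
N_BELIEFS = 21    # beliefs you hold
P_CORRECT = 0.95  # confidence in each one, assumed independent

# Before any argument: expected number of wrong beliefs.
expected_wrong_before = N_BELIEFS * (1 - P_CORRECT)
print(expected_wrong_before)  # ~1.05

# An argument settles one belief, win or lose. The other 20 are untouched,
# so the expected number of remaining wrong beliefs is the same whether
# you turned out to be right or wrong about the settled one.
expected_wrong_after = (N_BELIEFS - 1) * (1 - P_CORRECT)
print(expected_wrong_after)  # ~1.00
```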
So if I understand the point you’re making: Losing an argument provides enough evidence of your prior ignorance to prevent any net gain in your expectation of your own overall knowledgeability, at least relative to winning the argument.
I don’t disagree, but I don’t know why I’d care to base an emotional response on this kind of evaluation. I’m not fretting over my absolute position on the axis of knowledge, I’m just hill climbing. It’s the first derivative that my decisions affect, not the initial constant.
It isn’t that winning the lottery is better than being born rich, it’s that winning the lottery is better than not winning the lottery. Even if you’re already rich, winning the lottery is good. Presumably you weren’t born right about everything, which means it’s more useful to lose arguments than win them. After all, if you never lose an argument, what’s more likely: that you are right about everything, that you’re the best arguer ever, or that you simply don’t argue things you’re wrong about?
After all, if you never lose an argument, what’s more likely: that you are right about everything, that you’re the best arguer ever, or that you simply don’t argue things you’re wrong about?
My first thought was b). What was the intended response?
Or that you are right about everything that you believe in strongly enough to argue about.
In other words, avoid believing strongly in the absence of evidence. And don’t argue where you don’t have the facts on your side.
Presumably, not (a). In both other cases you’ve managed to not notice you’re wrong.
It’s a terrible quote more because it is unwise to have a significant emotional attitude towards finding out you’re wrong, since this tends to reinforce irrational defense mechanisms (“Let’s agree to disagree!”). The purpose of argument is, I hope, to improve your understanding of the world, so even if you shouldn’t be thrilled to find yourself wrong, you shouldn’t be afraid of doing so.
It’s a terrible quote more because it is unwise to have a significant emotional attitude towards finding out you’re wrong, since this tends to reinforce irrational defense mechanisms
Yes, it is unwise to have such emotional attitudes, but you don’t get rid of them by saying that they are bad. Honestly acknowledging their existence, as in the original quote, is probably a better route to their elimination than an emotionless assertion that losing arguments is good. The quote, on its own, probably doesn’t do much good, and perhaps does some harm, but I think it is probably a better step to accomplishing loqi’s goal than his phrasing.
Honestly acknowledging their existence, as in the original quote, is probably a better route to their elimination than an emotionless assertion that losing arguments is good.
Possibly, but I certainly wasn’t advocating an emotionless response. Fight fire with fire! If you realize you’re feeling stupid for having been wrong instead of feeling excited to have learned something, go ahead and feel stupid for feeling stupid.
I think I understand the rationale behind the original quote: Being wrong feels awful, so you should try to be right as often as possible. But this emotional response also disincentivizes attempts to stick your neck out on behalf of your existing beliefs.
One might counter that a positive emotional response to being wrong provides an incentive for being wrong in the first place just so you can feel good about discarding your flawed beliefs in the future. This strikes me as a far less plausible mechanism than the above.
I think I understand the rationale behind the original quote: Being wrong feels awful, so you should try to be right as often as possible. But this emotional response also disincentivizes attempts to stick your neck out on behalf of your existing beliefs.
I agree that this is a dangerous use of the original quote, which I admitted can be put to both good and bad uses.
I probably shouldn’t have invoked you and definitely shouldn’t have used “emotionless.”
Does anyone know the origin of this notion (that being wrong is the best outcome of an argument)? It strikes me as basically a founding principle of rationality, and I’d like to know the earliest public reference to, or discussion of, it. Alternately, is this sentiment summarized in any good quotes? It is hugely important for Hegel, but he isn’t, you know, pithy.
This kind of sentiment pops up in Plato a lot, esp. in discussions of rhetoric, like here in Gorgias:
“For I count being refuted a greater good, insofar as it is a greater good to be rid of the greatest evil from oneself than to rid someone else of it. I don’t suppose that any evil for a man is as great as false belief about the things we’re discussing right now.” (458a, Zeyl Translation)
Excellent point. This concept goes squarely with much of Socrates’ philosophy: the wise men knew nothing, and he knew nothing, but he knew it and they didn’t; thus he was the wisest man alive, as the oracle had said.
In information theory, there’s the concept of the surprisal, which is the logarithm of the inverse of the probability of an event. The lower the probability, the higher the surprise(al). The higher the surprisal, the greater the information content.
(Intuitively, the less likely something is, the more you change your beliefs upon learning it.)
So, yeah, it’s pretty enshrined in information theory. Entropy is equivalent to the (oxymoronic) “expected surprisal”. That is, given a discrete probability distribution over events, the probability-weighted average surprisal is the entropy.
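A minimal illustration in code (the distribution is arbitrary, chosen only so the numbers come out evenly):

```python
import math

def surprisal_bits(p: float) -> float:
    """Surprisal of an event with probability p, in bits: log2(1/p)."""
    return math.log2(1.0 / p)

# An arbitrary four-event distribution.
dist = [0.5, 0.25, 0.125, 0.125]

for p in dist:
    print(f"p = {p}: surprisal = {surprisal_bits(p)} bits")

# Entropy is the probability-weighted average ("expected") surprisal.
entropy = sum(p * surprisal_bits(p) for p in dist)
print(f"entropy = {entropy} bits")  # 1.75 bits for this distribution
```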
Incidentally, as part of a project to convert all of the laws of physics into information-theoretic form, I realized that the elastic energy of a deformable body tells you its probability of being in that state, and (by the above argument) its information content. That means you can explain failure modes in terms of the component being forced to store more information than it’s capable of.
Well, it’s interesting to me.
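For the curious, a sketch of the standard relation this rests on (these are ordinary canonical-ensemble facts; applying them to elastic deformation is the commenter’s own conjecture):

```latex
p_i = \frac{e^{-E_i/k_B T}}{Z}
\qquad \Longrightarrow \qquad
-\ln p_i = \frac{E_i}{k_B T} + \ln Z
```

So the surprisal of a microstate grows linearly with its energy: a state storing energy $E$ carries $E/k_B T$ nats of information content, up to the state-independent constant $\ln Z$.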
You seem like as good a person to ask this as any: Is there a good introduction to information theory out there? How would one start digging into the field?
To be quite honest, I only really started to study it after reading Eliezer Yudkowsky’s Engines of Cognition, which connected it to what I know about thermodynamics. (Two blog posts inspired by it.) So, like you, I’m an autodidact on the topic.
Most people would recommend David MacKay’s downloadable book, which is written in a friendly, accessible tone. That helped a lot, but I also found it hard to follow at times. That may be due to not having a physical copy, though. And it can’t be beat as a technical reference or in terms of depth.
Personally, my path to learning about it was basically to read the Wikipedia articles on Information Theory and Kullback-Leibler divergence, and every relevant, interesting link that branches off from those (on or off Wikipedia).
ETA: Oh, and learning about statistical mechanics, especially the canonical ensemble, was a big help for me too, esp. given the relation to the E. T. Jaynes articles on the maximum entropy formalism. But YMMV.