Does anyone know the origin of this notion (that being wrong is the best outcome of an argument)? It strikes me as basically a founding principle of rationality, and I'd like to know the earliest public reference to, or discussion of, it. Alternately, is this sentiment summarized in any good quotes? It is hugely important for Hegel, but he isn't, you know, pithy.
This kind of sentiment pops up in Plato a lot, esp. in discussions of rhetoric, like here in Gorgias:
“For I count being refuted a greater good, insofar as it is a greater good to be rid of the greatest evil from oneself than to rid someone else of it. I don’t suppose that any evil for a man is as great as false belief about the things we’re discussing right now.” (458a, Zeyl Translation)
Excellent point. This concept squares with much of Socrates' philosophy: the wise men knew nothing, and he knew nothing, but he knew it and they didn't; thus he was the wisest man alive, as the oracle had said.
In information theory, there's the concept of surprisal, which is the logarithm of the inverse of the probability of an event: I(x) = log(1/p(x)) = -log p(x). The lower the probability, the higher the surprisal; the higher the surprisal, the greater the information content.
(Intuitively, the less likely something is, the more you change your beliefs upon learning it.)
So, yeah, it’s pretty enshrined in information theory. Entropy is equivalent to the (oxymoronic) “expected surprisal”. That is, given a discrete probability distribution over events, the probability-weighted average surprisal is the entropy.
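To make those two definitions concrete, here's a minimal Python sketch (the function names are mine, purely for illustration, not from any particular library):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits: log2(1/p) = -log2(p)."""
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    """Shannon entropy: the probability-weighted average surprisal."""
    return sum(p * surprisal(p) for p in dist if p > 0)

print(surprisal(0.5))        # 1.0 bit    -- a fair coin flip
print(surprisal(0.001))      # ~9.97 bits -- rarer events carry more information
print(entropy([0.5, 0.5]))   # 1.0 bit    -- expected surprisal of a fair coin
print(entropy([0.9, 0.1]))   # ~0.47 bits -- a biased coin surprises less on average
```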
Incidentally, as part of a project to convert all of the laws of physics into information-theoretic form, I realized that the elastic energy of a deformable body tells you the probability of its being in that state (via the Boltzmann factor), and therefore, by the above argument, its information content. That means you can explain failure modes in terms of the component being forced to store more information than it can hold.
Well, it's interesting to me.
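Here's a toy sketch of that energy-to-surprisal link, assuming the state probability follows a Boltzmann distribution P ∝ exp(-E/kT); the function name and example values are mine, purely illustrative:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_surprisal_nats(elastic_energy_j: float, temperature_k: float) -> float:
    """If P(state) is proportional to exp(-E / kT), then the surprisal
    -ln P equals E / kT plus a constant (from the partition function)."""
    return elastic_energy_j / (K_B * temperature_k)

# One microjoule of stored elastic energy at room temperature is a
# staggeringly improbable state for thermal fluctuations alone:
print(boltzmann_surprisal_nats(1e-6, 300.0))  # ~2.4e14 nats
```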
You seem like as good a person to ask this as any: Is there a good introduction to information theory out there? How would one start digging into the field?
To be quite honest, I only really started to study it after reading Eliezer Yudkowsky’s Engines of Cognition, which connected it to what I know about thermodynamics. (Two blog posts inspired by it.) So, like you, I’m an autodidact on the topic.
Most people would recommend David MacKay’s downloadable book, which is written in a friendly, accessible tone. It helped a lot, though I also found it hard to follow at times; that may be due to not having a physical copy. Still, it can’t be beaten as a technical reference or in terms of depth.
Personally, my path to learning about it was basically to read the Wikipedia articles on Information Theory and Kullback-Leibler divergence, and every relevant, interesting link that branches off from those (on or off Wikipedia).
ETA: Oh, and learning about statistical mechanics, especially the canonical ensemble, was a big help for me too, esp. given the relation to the E. T. Jaynes articles on the maximum entropy formalism. But YMMV.