Recommending a way into Korzybski is difficult. I read Korzybski the hard way, having been inspired to seek out “Science and Sanity” at the age of about 15, by mentions of K and General Semantics in some of Heinlein’s stories, and found it was in the public library. (Ah, the Golden Age of Science Fiction—being 15 years old.) I found it extraordinarily laborious, but about ten years later I read it again and found it as clear as day (albeit a rather long day). I’ve heard of others having a similar experience. During my earlier exposure I also read Hayakawa, and Stuart Chase’s “The Tyranny of Words” and “The Power of Words” (but I think Chase goes overboard on dismissing abstractions as meaningless).
The thing is, though, all of these books are rather dated now. I can read them now and see what I saw in them then, but I question whether they will work for a modern reader. What more modern sources are there for these ideas? The General Semantics movement itself seems to have produced little, although the Institute of General Semantics and the International Society for General Semantics still trundle on. One of them produced a 5th edition of S&S, but it adds little but page count to the 4th.
I’ll sound like an Eliezer fanboy, but the best answer I know to the question, “if not Korzybski, then what?” is the Sequences. Some themes of Korzybski:
The distinction between word and thing, and the ladder of abstraction. See a tree falls in the forest, words being wrong, and more.
Implications of modern science for understanding reality. If what Eliezer writes on QM is not always quite right, it is not so very wrong. The same could be said of K’s knowledge of the science of his time. I judge this not by my own knowledge of the subject (which is small) but the absence of anyone showing up to demolish swathes of it as nonsense. The most contentious thing seems to be whether MWI vs. Copenhagen is as much a slam dunk as EY believes, but I see that issue as only a sideshow anyway.
The inexhaustibility of the physical object, as opposed to our finite ideas of the object. (“More can be said about a single apple than about all the apples in the world.”—EY in “Twelve Virtues”)
“The map is not the territory.” (A point that I believe was first made in those words by K.) As someone else put it, if you draw a line on the map, two lanes of blacktop do not appear under your feet.
The principle of non-identity: nothing (quantum-mechanical arcana aside) is identical to anything else. When you describe two things with the same words, they are still different things.
And there are some topics in the sequences not covered by K, but which are essential to good thinking:
Causality. To borrow Dobzhansky’s dictum on the role of evolution in biology, nothing in science makes sense except in the light of causality. I think it’s something that goes under-appreciated on LessWrong, even by Eliezer, despite what he’s written on the subject.
Neuroscience. Implications for “free will”, “agency”, “self-modification”, “identity”, etc. I think a lot of this will date as much as K’s account of “colloidal” mechanisms, but however it turns out (if we knew, we’d be there already) it’s what we have right now, and it matters.
Weaknesses of the Sequences:
S&S is copiously referenced; the Sequences sparsely.
Causality is not treated as well as it might be. Too much emphasis on statistics, not enough on mechanisms. (ETA: And it took someone else to point out to Eliezer the distinction between Bayesian graphs and causal graphs.) I would be interested to see Eliezer’s response to Judea Pearl’s paper “Why I am only a half-Bayesian.” And systems of circular causality and the paradoxical phenomena they produce get almost no notice.
An excessive emphasis on Bayesian reasoning as sufficient for salvation, er, solving every problem. I’m quite willing to grant it a fundamental status in principle, and its heuristic usefulness when actual numbers are not and cannot be available. (Which Eliezer himself has said—this is more a fault of LW than the Sequences.) But until someone comes up with an effective method of finding better models when the probability of the observations given any existing model has fallen to absurdly low levels, talk of “Bayesian superintelligences” is moonshine. I mean “effective” in both senses: practical, and specifiable as an algorithm. Such a method would be equivalent to an AGI, so I’m not holding my breath. What does one do in the meantime? There are answers (e.g. Andrew Gelman on model checking), but are they good enough?
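The point about absurdly low likelihoods can be made concrete with a toy sketch (entirely my own construction, not from the discussion above; the coin data and candidate models are made up). Bayesian updating can only reallocate belief among the models already on the table; if every model assigns the data astronomically low probability, the posterior still sums to 1 and quietly crowns a “best of a bad lot”:

```python
# Toy illustration: Bayesian updating over a fixed model set cannot
# signal that the whole set is wrong. Two candidate coin biases are
# scored against data neither of them remotely explains.
from math import comb, log, exp

def binom_loglik(k, n, p):
    """Log-probability of k heads in n flips under a coin with bias p."""
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

n, k = 1000, 990                      # observed: 990 heads in 1000 flips
models = {"fair (p=0.5)": 0.5, "biased (p=0.6)": 0.6}

logliks = {name: binom_loglik(k, n, p) for name, p in models.items()}

# Normalize to a posterior over the two models (uniform prior).
m = max(logliks.values())
weights = {name: exp(ll - m) for name, ll in logliks.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

for name in models:
    print(f"{name}: log P(data|model) = {logliks[name]:.1f}, "
          f"posterior = {posterior[name]:.3f}")
# Both log-likelihoods are far below -400: the data are wildly
# improbable under every model on offer, yet the posterior confidently
# concentrates on one of them. Something like Gelman's model checking
# (simulate data under the winner, compare to what was observed) is
# what flags that the model set itself needs replacing -- and finding
# the replacement is the step Bayes does not supply.
```

This is the sense in which updating is not model-finding: the arithmetic never complains, no matter how poor the menu.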
I would be interested to see Eliezer’s response to Judea Pearl’s paper “Why I am only a half-Bayesian.”
I believe I saw that at some point, actually—I think I’m thinking of this comment and the associated thread, which unfortunately doesn’t have much in the way of a response.
Hm, the context there wasn’t originally about causality, which seems to come in as an aside. As far as I can judge from it, Eliezer is saying that causal models are things you can have probability distributions over, to be updated by evidence, just like every other sort of model of reality. They are not a separate magisterium from probability, and there is no need to be only half a Bayesian. That causality is not a statistical concept is of no more significance than that velocity is not a statistical concept.
It’s not clear to me what he intended by mentioning Pearl’s paper. As far as I know, Pearl nowhere considers probability distributions over causal models. He considers them only from the point of view of answering two-valued questions such as “Are these data consistent with this causal model?” or “Given this causal model, do these observations of some of its variables allow estimation of the magnitude of these causal effects?” He does not study questions such as “what causal model is most likely, given these data?” Some do, although as yet that seems to be limited to one group of researchers in neuroscience.
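To make “a probability distribution over causal models” concrete, here is a small sketch of my own (the counts and the Beta(1,1) scoring choice are illustrative assumptions, not anything from Pearl or the thread). It scores three candidate structures for two binary variables and normalizes the scores into a posterior, which is exactly the “what causal model is most likely, given these data?” question:

```python
# Sketch: a posterior over candidate causal structures for two binary
# variables, using closed-form Bernoulli marginal likelihoods under a
# Beta(1,1) prior: P(k of n) = k!(n-k)!/(n+1)!.
from math import lgamma, exp

def log_marglik(k, n):
    """Log marginal likelihood of k successes in n trials, Beta(1,1) prior."""
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

# Made-up observed counts of (x, y) pairs; X and Y are strongly associated.
counts = {(0, 0): 40, (0, 1): 10, (1, 0): 10, (1, 1): 40}
n = sum(counts.values())
x1 = counts[(1, 0)] + counts[(1, 1)]          # occurrences of X = 1
y1 = counts[(0, 1)] + counts[(1, 1)]          # occurrences of Y = 1

scores = {
    # X and Y independent: two unrelated Bernoullis.
    "X _||_ Y": log_marglik(x1, n) + log_marglik(y1, n),
    # X -> Y: model Y separately within each value of X.
    "X -> Y": (log_marglik(x1, n)
               + log_marglik(counts[(0, 1)], counts[(0, 0)] + counts[(0, 1)])
               + log_marglik(counts[(1, 1)], counts[(1, 0)] + counts[(1, 1)])),
    # Y -> X: the reverse arrow, scored the same way.
    "Y -> X": (log_marglik(y1, n)
               + log_marglik(counts[(1, 0)], counts[(0, 0)] + counts[(1, 0)])
               + log_marglik(counts[(1, 1)], counts[(0, 1)] + counts[(1, 1)])),
}

m = max(scores.values())
weights = {name: exp(s - m) for name, s in scores.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
for model, p in posterior.items():
    print(f"{model}: posterior {p:.6f}")
# The independence model is crushed, but X -> Y and Y -> X score
# identically: observational data alone cannot orient the arrow
# between Markov-equivalent structures, which is one face of Pearl's
# point that causality outruns ordinary statistics.
```

The tie between the two directed models is the interesting part: a posterior over structures is perfectly well defined, but without interventions it can only take you as far as the Markov equivalence class.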