As an example of something that didn’t seem ‘beyond’ LW, Rand’s Razor seems like a useful habit for dispensing with perverse concepts like grue and bleen, but Follow the Improbability feels like the better version of it. Like, how is ‘necessity’ measured?
These are good questions. Ayn Rand wrote a ton of material on concept-formation: some of it is in ITOE, and some of it is scattered amongst essays on other topics. For example, her essay “The Art of Smearing” opens by examining the use of the flawed concept “extremism” by certain political groups to attack their opponents, and then opens out into a discussion of the formation of “anti-concepts” in general, and their effects on cognition. She has several essays of a similar nature.
The downside of such an approach is that there isn’t one unified resource you can point people to that explains Objectivist epistemology in detail.
Objectivist epistemology is hard to summarize: it includes some useful cognitive heuristics (like “Rand’s razor” and the idea of checking for “package-deal” concepts or other invalid concepts), but the whole package includes a lot more than those heuristics. It’s also integrated down to philosophical fundamentals, including, most notably, an attempt at carving out a third position in the traditional Platonism vs. nominalism debate over universals. That is difficult to do justice to in a comment. The best resources I can point to are either ITOE (which is admittedly dated) or Harry Binswanger’s book How We Know.
But just for the purposes of illustration, the Objectivist answer to the grue/bleen issue would be that blue and green are the more basic concepts because they are directly derived from perceptual experience. (Following Aristotle, Objectivism holds that perception is the ultimate base of all knowledge.) The idea that “a green object may turn blue for some unknown reason at some future date” is an arbitrary claim with no evidential basis. If a given green object did change colour, one should determine the cause of the colour change before defining a new concept under which to categorize the object. (And, realistically, given present scientific knowledge, there isn’t much reason to believe that any given object will begin changing colour for no known reason.)
Likewise, the Objectivist answer to the blegg/rube issue (https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) would be that there is too much overlap between the two concepts (as presented), and that therefore the two concepts are not valid. This is not to say that there is no distinction between the objects, but, absent any knowledge of why certain properties tend to appear together, the individual objects being categorized would be better described as simply “palladium-containing, blue, egg-shaped object” or whatever.
One does not form concepts simply by clustering together objects with similar properties, but also by identifying causal connections between the properties. In the case of bleggs and rubes, if one could show that the fact that an object contained palladium tended to cause it to be blue and egg-shaped, then one would have a basis for forming an objective concept of “blegg”.
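To make that concrete, here is a minimal Python sketch of the “common cause makes the properties travel together” point. The specific causal story (palladium content driving colour and shape) and all of the numbers are assumptions for illustration only, not part of the original thought experiment:

```python
import random

random.seed(0)

def noisy_effect(cause, p_if_cause=0.95, p_otherwise=0.05):
    """A downstream property that mostly, but not always, follows its cause."""
    return random.random() < (p_if_cause if cause else p_otherwise)

def make_object():
    # Assumed causal story: palladium content is the common cause;
    # colour and shape are its noisy downstream effects.
    palladium = random.random() < 0.5
    return {
        "palladium": palladium,
        "blue": noisy_effect(palladium),
        "egg_shaped": noisy_effect(palladium),
    }

objects = [make_object() for _ in range(10_000)]

blue_eggs = [o for o in objects if o["blue"] and o["egg_shaped"]]
base_rate = sum(o["palladium"] for o in objects) / len(objects)
cluster_rate = sum(o["palladium"] for o in blue_eggs) / len(blue_eggs)

print(f"P(palladium)                       ~ {base_rate:.2f}")    # roughly 0.50
print(f"P(palladium | blue and egg-shaped) ~ {cluster_rate:.2f}")  # close to 1
```

The cluster is informative exactly because the observable properties share a cause; remove that causal link and “blegg” collapses back into an arbitrary conjunction of properties.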
Compare with examples in biology, where there is no confusion over whether, say, a duck-billed platypus is a bird or a mammal.
The platypus shares properties with other mammals because of a shared underlying cause (similar DNA, due to shared ancestry), and it differs from them in some respects because it sits further away on the biological “family tree”. But its most recent common ancestor with the other mammals is far more recent than its most recent common ancestor with the birds or reptiles, so it should objectively be categorized as a mammal. On the other hand, Objectivism also agrees with LW in holding that there is no deep fact of the matter about whether a certain object is really a blegg, or whether a certain animal is really a mammal. All concepts are simply human-created categorization schemes, evaluated by the extent to which they track genuine distinctions in reality (a genuine distinction, such as that between mammals and birds, usually being supported by the identification of an underlying cause that gives rise to it, such as differences in DNA).
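For what it’s worth, the “most recent common ancestor” criterion is mechanical enough to write down; the toy tree below is drastically simplified (the group names are real, but only the branching structure matters here):

```python
# Toy phylogeny: each taxon points to its parent group.
parent = {
    "platypus": "Monotremata",
    "human": "Theria",
    "kangaroo": "Theria",
    "pigeon": "Sauropsida",
    "crocodile": "Sauropsida",
    "Monotremata": "Mammalia",
    "Theria": "Mammalia",
    "Mammalia": "Amniota",
    "Sauropsida": "Amniota",
    "Amniota": None,
}

def ancestors(taxon):
    """Path from a taxon up to the root of the tree."""
    path = []
    while taxon is not None:
        path.append(taxon)
        taxon = parent[taxon]
    return path

def most_recent_common_ancestor(a, b):
    lineage_a = ancestors(a)
    return next(node for node in ancestors(b) if node in lineage_a)

print(most_recent_common_ancestor("platypus", "human"))   # Mammalia
print(most_recent_common_ancestor("platypus", "pigeon"))  # Amniota (a much older split)
```

The platypus groups with the mammals because that split is the more recent one, which is exactly the underlying cause (shared ancestry) the classification is tracking.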
As you can see, this is only scratching the surface of a very complex topic, so I’ll leave the above as simply an indication of the Objectivist approach. Otherwise, yes, you are correct that there’s a decent amount of material in the modern-day “rationality corpus” that isn’t integrated into Objectivism.
(Edit: I just read the linked SSC article that discusses Aristotelian epistemology. Scott’s description doesn’t really apply to Objectivism, which holds that knowledge is reached via induction (from observations), not deduction, though many Objectivists, like Scott’s campus Objectivist friends, unknowingly practice the wrong methodology (e.g., misusing logic and over-relying on deductive reasoning). Likewise, though Objectivism holds that (contextual) certainty is possible, it also makes clear that claims can be classed as “possible” or “probable”, with probability increasing as evidence is accumulated. Finally, because knowledge is integrated, if one of your beliefs is proved wrong, you should seek out other related beliefs that may now be called into question.)
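As a numerical illustration of the “possible, then probable, then (contextually) certain” progression, here is a small sketch that just applies Bayes’ rule repeatedly; the 0.5 prior and the 4:1 likelihood ratio are arbitrary numbers of my choosing, and this is my framing rather than an Objectivist formalism:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One application of Bayes' rule for a single new piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5  # start undecided about the claim
for i in range(1, 9):
    # each observation is 4x as likely if the claim is true than if it is false
    belief = update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
    print(f"after {i} observations: P(claim) = {belief:.4f}")
```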
It would be interesting to work on an integration of Objectivism with more modern approaches, though I have other projects that are a higher priority for me personally. However, I’ve pondered the idea of setting up an online community that would serve as a gathering place for such work.
As for the other comment’s question about practical uses of Objectivist epistemology, off the top of my head I can think of: The Logical Leap by David Harriman (an attempt at solving the problem of induction), the startup Mystery Science (elementary science education based on an Objectivist approach), Jean Moroney’s website/consultancy Thinking Directions (material similar in kind to CFAR’s), and Boom Supersonic, founded by Blake Scholl (a supersonic jet startup; Scholl has said he developed the original idea by applying some of Jean Moroney’s thinking techniques).
Ayn Rand wrote a ton of material on concept-formation: some of it is in ITOE, and some of it is scattered amongst essays on other topics. For example, her essay “The Art of Smearing” opens by examining the use of the flawed concept “extremism” by certain political groups to attack their opponents, and then opens out into a discussion of the formation of “anti-concepts” in general, and their effects on cognition. She has several essays of a similar nature.
My prediction, having read a few of these, is that I will agree with them more than I disagree with them; when she points to someone making an error, at least 90% of the time they’ll actually be making an error. I think the phrase ‘anti-pattern’ is more common on LW than ‘anti-concept’, but they seem to be much the same idea, with similar usage. (37 Ways That Words Can Be Wrong feels like a good analogous example from LW.)
That said, there’s a somewhat complicated point here that was hammered home for me by thinking about causal reasoning. Specifically, humans are pretty good at intuitive causal reasoning, and so philosophers discussing the differences between causal decision theory and evidential decision theory found it easy to compute ‘what CDT would say’ for a particular situation, by checking what their intuitive causal sense of the situation was.
But some situations are very complicated; see figure 1 of this paper, for example. In order to do causal reasoning in an environment like that, it helps if it’s ‘math so simple a computer could do it,’ which involves really getting to the heart of the thing and finding the simple core.
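As a concrete illustration of what “math so simple a computer could do it” buys you, here is a toy three-variable model (the structure and numbers are invented for illustration) in which conditioning on an action and intervening on it give different answers; this is the gap between the EDT-style and CDT-style calculation, and it is exactly the kind of bookkeeping that stops being intuitively obvious once the graph looks like that figure:

```python
# Assumed toy structure: a hidden confounder U influences both the
# action X and the outcome Y, and X also influences Y directly.
P_U = {0: 0.5, 1: 0.5}
P_X_given_U = {0: {0: 0.9, 1: 0.1},   # P(X=x | U=u): u=0 rarely acts, u=1 usually acts
               1: {0: 0.2, 1: 0.8}}
P_Y_given_XU = {(0, 0): 0.2, (1, 0): 0.5,   # P(Y=1 | X=x, U=u)
                (0, 1): 0.6, (1, 1): 0.9}

def joint(u, x, y):
    """Joint probability P(U=u, X=x, Y=y) under the assumed model."""
    p_y = P_Y_given_XU[(x, u)]
    return P_U[u] * P_X_given_U[u][x] * (p_y if y == 1 else 1 - p_y)

# Conditioning (EDT-style): P(Y=1 | X=1)
num = sum(joint(u, 1, 1) for u in P_U)
den = sum(joint(u, 1, y) for u in P_U for y in (0, 1))
conditioned = num / den

# Intervening (CDT-style): P(Y=1 | do(X=1)) = sum_u P(u) * P(Y=1 | X=1, u)
intervened = sum(P_U[u] * P_Y_given_XU[(1, u)] for u in P_U)

print(f"P(Y=1 | X=1)     = {conditioned:.3f}")  # inflated by the confounder
print(f"P(Y=1 | do(X=1)) = {intervened:.3f}")
```

With three variables you can eyeball the discrepancy; with a graph of any real size you want the enumeration done mechanically.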
From what I can tell, the Objectivists are going after the right sort of thing (the point of concepts is to help with reasoning to achieve practical ends in the real world, i.e. rationalists should win and beliefs should pay rent in anticipated experience), and so I’m unlikely to actually uncover any fundamental disagreements in goals. [Even on the probabilistic front, you could go from Peikoff’s “knowledge is contextual” to a set-theoretic definition of probability and end up Bayesian-ish.]
But it feels to me like… it should be easy to summarize, in some way? Or, like, the ‘LW view’ has a lot of “things it’s against” (the whole focus on heuristics and biases seems important here), and “things it’s for” (‘beliefs should pay rent’ feels like potentially a decent summary here), and it feels like it has a clear view of both of them. I feel like the Objectivist epistemology is less clear about the “things it’s for”, potentially obscured by being motivated mostly by the “things it’s against.” Like, I think LW gets a lot of its value by trying to get down to the level of “we could program a computer to do it,” in a way that requires peering inside some cognitive modules and algorithms that Rand could assume her audience had.
Compare with examples in biology, where there is no confusion over whether, say, a duck-billed platypus is a bird or a mammal.
Tho I do think the history of taxonomy in biology includes many examples of confused concepts and uncertain boundaries. Even the question of “what things are alive?” runs into perverse corner cases.