Ayn Rand wrote a ton of material on concept-formation: some of it is in ITOE, and some of it is scattered amongst essays on other topics. For example, her essay “The Art of Smearing” opens by examining how certain political groups use the flawed concept “extremism” to attack their opponents, and then broadens into a discussion of the formation of “anti-concepts” in general and their effects on cognition. She has several essays of a similar nature.
My prediction, having read a few of these, is that I will agree with them more than I disagree: when she points to someone making an error, at least 90% of the time they’ll actually be making an error. I think the phrase ‘anti-pattern’ is more common on LW than ‘anti-concept’, but they seem to be much the same idea, used in much the same way. (37 Ways That Words Can Be Wrong feels like the closest analogue from LW.)
That said, there’s a somewhat complicated point here that was hammered home for me by thinking about causal reasoning. Specifically, humans are pretty good at intuitive causal reasoning, and so philosophers discussing the differences between causal decision theory and evidential decision theory found it easy to compute ‘what CDT would say’ for a particular situation by consulting their intuitive causal sense of it.
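To make that concrete, here’s a minimal sketch of what ‘computing what CDT would say’ can look like when it really is mechanical. Newcomb’s problem isn’t mentioned above; I’m using it because it’s the standard CDT-vs-EDT test case, and the payoffs and predictor accuracy below are assumptions chosen for concreteness.

```python
MILLION, THOUSAND = 1_000_000, 1_000
ACCURACY = 0.99  # assumed predictor accuracy

def edt_value(action):
    """EDT conditions on the action: one-boxing is evidence that the
    opaque box was filled."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * MILLION + (THOUSAND if action == "two-box" else 0)

def cdt_value(action, p_full):
    """CDT holds the (already-fixed) box contents constant: the action
    cannot causally change p_full."""
    return p_full * MILLION + (THOUSAND if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, edt_value(action), cdt_value(action, p_full=0.5))
# EDT favors one-boxing; CDT favors two-boxing for any fixed p_full,
# since two-boxing adds THOUSAND without changing the box contents.
```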
But some situations are very complicated; see figure 1 of this paper, for example. In order to do causal reasoning in an environment like that, it helps if it’s ‘math so simple a computer could do it,’ which involves really getting to the heart of the thing and finding the simple core.
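For a toy version of that: on a three-node graph Z → X, Z → Y, X → Y (with Z a confounder), the interventional quantity P(Y | do(X)) comes from Pearl’s back-door adjustment rather than the ordinary conditional P(Y | X). All the numbers here are made up for illustration; real graphs like the one in that figure are far messier.

```python
p_z = {0: 0.6, 1: 0.4}                      # P(Z), assumed
p_x1_given_z = {0: 0.2, 1: 0.7}             # P(X=1 | Z), assumed
p_y1_given_xz = {(0, 0): 0.2, (1, 0): 0.5,  # P(Y=1 | X, Z), keyed (x, z),
                 (0, 1): 0.4, (1, 1): 0.9}  # assumed

def p_y1_given_x1():
    """Observational: P(Y=1 | X=1) = sum_z P(Y=1 | X=1, z) P(z | X=1)."""
    p_x1 = sum(p_x1_given_z[z] * p_z[z] for z in p_z)
    return sum(p_y1_given_xz[(1, z)] * p_x1_given_z[z] * p_z[z] / p_x1
               for z in p_z)

def p_y1_do_x1():
    """Interventional: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, z) P(z)."""
    return sum(p_y1_given_xz[(1, z)] * p_z[z] for z in p_z)

print(p_y1_given_x1())  # ~0.78, inflated by the confounder Z
print(p_y1_do_x1())     # 0.66, the actual causal effect of setting X=1
```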
From what I can tell, the Objectivists are going after the right sort of thing (the point of concepts is to help with reasoning to achieve practical ends in the real world, i.e. rationalists should win and beliefs should pay rent in anticipated experience), and so I’m unlikely to actually uncover any fundamental disagreements in goals. [Even on the probabilistic front, you could go from Peikoff’s “knowledge is contextual” to a set-theoretic definition of probability and end up Bayesian-ish.]
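To gesture at that path concretely (my sketch, not anything Peikoff wrote): read “knowledge is contextual” as “every probability is conditional on a background context C”, take the set-theoretic (Kolmogorov) definition of conditional probability, and Bayes’ theorem drops out in one line:

```latex
P(A \mid C) = \frac{P(A \cap C)}{P(C)}
\quad\Longrightarrow\quad
P(H \mid E \cap C) = \frac{P(E \mid H \cap C)\, P(H \mid C)}{P(E \mid C)}
```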
But it feels to me like… it should be easy to summarize, in some way? Or, like, the ‘LW view’ has a lot of “things it’s against” (the whole focus on heuristics and biases seems important here) and “things it’s for” (‘beliefs should pay rent’ feels like a decent summary here), and it seems to have a clear view of both. I feel like the Objectivist epistemology is less clear about the “things it’s for”, potentially obscured by being motivated mostly by the “things it’s against.” Like, I think LW gets a lot of its value by trying to get down to the level of “we could program a computer to do it,” in a way that requires peering inside cognitive modules and algorithms that Rand could simply assume her audience had.
Compare with examples in biology, where there is no confusion over whether, say, a duck-billed platypus is a bird or a mammal.
Tho I do think the history of taxonomy in biology includes many examples of confused concepts and uncertain boundaries. Even the question of “what things are alive?” runs into perverse corner cases.