Thanks for the reply. I’ll try to respond comprehensively; sorry if I miss anything. To start with—Aristotle.
What Aristotle Taught
Was Aristotle not an originator of a school of syllogistic logic that treated concepts somewhat similarly to the logical positivists?
I’m going to break this into two parts—the part about logic, and the part about concepts. Logic first. Aristotle indeed wrote six works on logic and reasoning, which are most often collectively called the Organon. Most of it is devoted to developing a valid system of syllogistic logic. The really nice part about syllogistic logic is that correct syllogisms are indisputably valid (but not indisputably sound). Aristotle is totally clear about this. He showed—correctly—that logic, properly applied, makes your conclusions as true as your premises (i.e., the inference is valid); but that alone still doesn’t entitle you to certainty about your conclusions, since you can’t trust your premises any more than you could from the start (i.e., validity is not soundness).
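To make the distinction concrete, here is the classic example in modern notation (my rendering; Aristotle’s own presentation is schematic rather than symbolic):

All men are mortal: ∀x (Man(x) → Mortal(x))
Socrates is a man: Man(Socrates)
Therefore, Socrates is mortal: ∴ Mortal(Socrates)

The inference is valid whatever the facts: if both premises are true, the conclusion cannot fail to be. Whether the argument is also sound depends on whether the premises really are true, and that is exactly the part logic alone can’t settle.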
In The Parable of Hemlock, ctrl+F “the Greeks.” Eliezer’s issue isn’t with syllogism. It’s with something different: the assertion that “all men are mortal” by definition. Aristotle says nothing of the sort, least of all in the Organon; he just uses the statement as a hypothetical premise to demonstrate the form of valid syllogism, the same way you might use an example like “all frogs are green, Harold is a frog, Harold is green” as a lesson in validity in a logic class, regardless of whether purple dart frogs exist. The text that most clearly shows this is the Topics, where Aristotle characterizes good arguments as constructed by syllogism (as developed in the earlier works of the Organon) or enthymematic syllogism, especially when the syllogism begins from established beliefs (endoxa) as premises. These endoxa, like “all men are mortal,” are explicitly not certain or guaranteed to be true; but they are better than wild speculation, especially if you are trying to persuade someone. So Eliezer’s attack on the Greeks is off base, mistaking the assertion of validity for the assertion of soundness.
There’s nothing wrong with syllogistic logic, as long as you don’t make too much of it. Eliezer’s top-line conclusion is that “logic never dictates any empirical question [with certainty]”; I think you would be extremely hard-pressed to find a sentence in Aristotle which disagrees, and Eliezer’s clear imputation that the Greeks did disagree is ignorant and uncharitable. Logic is a useful tool for reasoning from premises you are reasonably confident in, to conclusions you can be similarly confident in.
It’s no straw man to say that Aristotle liked logic. The straw-manning comes when Eliezer asserts that “the Greeks” thought you could derive certain empirical truths from logic alone. (Parmenides, Spinoza, and Kant attempted this, but not Plato, Aristotle, or most philosophers.) Rather, Aristotle’s logic is all about taking established pretty-good beliefs (which are not called certain, but are generally acknowledged and are the best we have to work with) and having a sure way to arrive at exactly equally good beliefs. Putting this in writing was an incredibly valuable contribution to philosophy.
Now for the part about concepts. Did Aristotle treat concepts similarly to the logical positivists? Honestly, I think not; my impression is that the average positivist was a nominalist about the question of universals, while the best summary of Aristotle’s view on the topic probably heavily uses the word hylomorphism. It’s kinda his own deal, like how Plato was Platonist. I don’t love Aristotle’s metaphysics, and I think there are powerful skeptical/nominalist critiques of hylomorphism, which is after all a formalist view of one kind or another. But I don’t think Eliezer really advanced them, or understood Aristotle’s (or any Greek’s) phenomenology of concepts at all. For a little taste of how nuanced Aristotle’s thoughts on words and concepts actually were, here’s another bit from the last book of the Organon:
It is impossible in a discussion to bring in the actual things discussed: we use their names as symbols instead of them; and therefore we suppose that what follows in the names, follows in the things as well, just as people who calculate suppose in regard to their counters. But the two cases (names and things) are not alike. For names are finite and so is the sum-total of formulae, while things are infinite in number. Inevitably, then, the same formulae, and a single name, have a number of meanings. [emphasis added]
Relevant Reading (by Philosopher)
specific pre-moderns you think identify and discuss this problem.
If we’re discussing the problem of “gee whiz, in what sense do concepts exist and truthfully inhere in an ever-changing world?” Virtually all of them! Here’s a short rogues’ gallery; take your pick if you’re intrigued by one in particular.
Plato: Plato’s answer is formalism. But even (or especially) if you think that’s absurd, his treatment of the question is incredibly valuable. Plato is deeply aware of, and deeply disturbed by, the fact that the world around him is changeable, that appearances and naively-constructed concepts deceive, and that nothing certain can be found in them. And the core of many of his dialogues is devoted to proving exactly that. Take the Theaetetus, where he talks about certain knowledge. Can we get it by sense perception? Not quite: appearances can deceive. What about judgment? Fallibility would indicate no. Is it justified true belief? Perhaps, but “justification” demands prior knowledge of the thing itself, so this account fails by circularity! Plato strongly hints at his solution of formalism, but to pave the way to it, he demolishes more standard accounts first by trying to prove the slipperiness of ordinary concepts and the inaccessibility of certainty. Skeptics can find a great deal to like. (Ever wonder why J.L. Mackie’s skeptical “argument from queerness” begins as a steadfast defense of Platonism as the only way to objective morality? For generations, skeptics have made hay by starting with Plato’s objections to others, then attacking Plato’s rehabilitative view as the final step of a deflationary account.) Parmenides is also recommended reading, as most of it is criticism of the theory of forms. But it’s not for the faint of heart; you’ll need some really good secondary lit—or far better, a supportive professor to read through it with. Trying to read and understand it by yourself is an aneurysm risk.
Aristotle: Often denser than Plato. But he’s far more methodical and much easier to interpret, since he’s not writing dialogues with Straussian readings or citing myths which he didn’t believe or any of that artistic jazz. The Nicomachean Ethics may be a good place to see him apply his method of discourse about the natural world, but the writings of his that are most relevant to this conversation are definitely Physics and Metaphysics. (Fun fact: the field was named for the book; “meta” is just Greek for “after”, so “Metaphysics” just means “after physics”, “more physics,” or maybe “physics 2”.)
Stoics: Chrysippus is your boy here. He is taken to be one of the first nominalists (nominalism being a general term for one of the most popular non-realist views, i.e., the view that universal properties are words alone and not things in their own right). https://iep.utm.edu/chrysipp/#H5 has a summary you might like, and it may be the best we can do, since virtually none of Chrysippus’ actual writings are extant (his views were passed to us by way of others’ summaries), and most other Stoics (like Epictetus or Aurelius) spent more time talking about ethics, with physics receiving more of a passing mention.
Epicureans: Really just Epicurus, as his teachings were passed down by Lucretius in De Rerum Natura. Virtually nothing else from this school is extant, but their influence is very significant. Steadfast materialists, atomists, atheists, and hedonists. This community would like their teachings a lot. I’ll take this opportunity to point out a trend which is commonplace throughout ancient philosophy: Epicureanism is atheist, but the text sings paeans to gods, using them as stand-ins for abstract concepts. This is weird, but not at all rare in ancient philosophy. Anytime you see someone invoke a god or a myth, before dismissing it as superstition, see if it’s useful to treat it as metaphor or conjecture instead. Remember that, for all his talk of gods and myths, one of the two crimes Socrates was executed for was impiety.
Skeptics: You will agree with these people less than their names imply you will. They thought some weird stuff; Academic or Pyrrhonian, either way it sometimes comes off as worshiping ignorance. In any case, formalists they were not, and their eponymous attitude comes across in their writings, which are very clear that if there are in fact universals, we are either unable to come to know them, or even morally forbidden to try.
Peripatetics, Cynics, Cyrenaics and more: there are so many ancient Greeks. Many of them may not have written anything of value on this question, I can’t say. This is the part where I confess ignorance of and wonder at the true diversity of Ancient Greek thought.
Another big gap in my knowledge is Christian and medieval thought, but I had enough friends who studied it to understand that my received caricatures of it were misplaced. Aquinas apparently contributed things to metaphysics in the vein of Aristotle. Maybe Augustine has dope metaphysics, no idea. God features prominently, of course, so know thyself and whether that’s a turn-off.
Early Moderns: Spinozism is super weird and monist and stuff. Maybe not that. Kantianism is incomprehensible, even in the original German, but if you can find a good professor to walk you through it (preferably in a classroom environment), there is a reason he was so influential. The obvious suggestion is the Critique of Pure Reason, and it is definitely the one that is relevant here. (It’s where the separation of syntheticity from aprioricity comes from! I don’t think it’s a good separation, but you will need to understand what it means if you want to understand many metaphysicians after him, most of all Kripke.) I personally like The Critique of Judgment too.
Continentals: Another gap in my knowledge. A friend read a lot of them and said “there’s no there there”, but I would guess that had as much to do with that friend as the writing itself. Another said Hegel is apparently very fun “in the right state of mind” (I think they meant psychedelics. This is not an endorsement of illegal drug use.) As with other categories on this list, I will acknowledge my ignorance of whatever brilliance might be here. For what it’s worth, if you are interested in critiquing the “classical” method of counterfactual reasoning—or reasoning in general—you may find allies here, even if they are strange bedfellows.
Moderns: Jumping right up to the 1900s. Meinong gets a bad rap but I still like him (do square circles exist? Maybe as much as anything else does!) Russell and Wittgenstein, you cited already. Tarski is also a great one, who created a formal theory of truth (“T-schemas” is a search term you can start with) which is intended to be generalizable over different uses of language. Almost certainly has connections to anything philosophy of language-related. I like Carnap a whole lot, and he did a lot of philosophy of science which you may find relevant. I dislike Kripke a lot, but there’s no question that his thought is intensely relevant to any philosophy which deals directly with the idea of meaning (he doesn’t think it’s a thing, or at least, wants a deflated version of it to be the norm). He took himself to be in the tradition of Wittgenstein.
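For a taste of what a T-schema looks like, here’s the standard textbook instance (my gloss, not a direct quote from Tarski):

“Snow is white” is true if and only if snow is white.

Schematically: ⌜p⌝ is true ↔ p, generalized over every sentence of the object language. The point is to pin down what a truth predicate must deliver for any sentence you feed it, without presupposing any particular metaphysics of meaning.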
Counterfactual Reasoning
I really like, and generally agree with, your summary of how edge cases and obtuse counterexamples have pushed people to somewhat absurd conclusions. I’ll provide some pushback, but first let me indulge myself in agreeing, and providing an example. My undergraduate senior paper employed an unfortunately complex variant of the trolley problem (guess how many tracks were involved?) to contest an arcane ethical principle relevant to a facially absurd variant of utilitarianism. It was truly approaching self-parody, and I was well aware; I just wasn’t sure what other topic I had an idea about which would fill enough pages. (Funnily enough, I can write more than enough pages on random internet fora, though.)
For all that ethics should be able to provide us with answers, and there should be answers even for corner cases… it is extremely clear to me that academic ethics has gone off the deep end. Ethical views are now defined based on cases which are often so ridiculous that whatever decision one would make in those situations is probably a noncentral example of ethical or unethical behavior. It’s clear enough to me how we got here, given a certain kind of steadfast realism about ethics, and it’s unclear what exact countervailing view I think should prevail… but somewhere, somehow, we have gone wrong.
Is the source of the problem counterfactual reasoning itself? Perhaps a certain too-strong form of it. But I also think that a mature version of “conceptual engineering” would see a lot of it employed.
I’m sure there are at least a couple other major sources of concept drift and sense accumulation, but I struggle to think of how often counterfactual arguments lead to real linguistic change. Can you provide an example?
The example, or family of examples, that I want to propose as an incredibly useful analogy here (definitely one with lots of examples of “concept drift and sense accumulation”) is law. Legal language isn’t exactly common usage, but it has a bunch of desirable features as a domain in which to apply “conceptual engineering.” The boundaries of initially-vague concepts like “probable cause” or “slander” are often decided based on past definitions and laid-out sets of necessary and sufficient conditions in case law. But they are also subject to shift when corner cases are encountered which clearly do or don’t fall into the category—previous understandings of the necessary and sufficient conditions be damned. Ultimately, the courts converge on definitions that are at the very least useful, and they use a number of methods to do it, counterfactual reasoning and N&S conditions being some of the tools in the toolbox. Do you think law should dispose of those tools, and do you think it would lead to better decisions if it did? My answer is “no”; I think they’re great pragmatic tools in conjunction with other tools, and that makes me think that N&S conditions and counterfactual reasoning aren’t the real problem here. They can be useful ways to engineer concepts, rather than just a destructive way to attack them with corner cases.
Legal language is also nice because it gives us a clear sense of an evaluative objective, a way to “grade” our engineering project—in a word, we might say “justice.” (Meanwhile, to engineer common language, we might grade based on “clarity” or “intersubjectivity”.) When the existing body of rules and conditions still leaves room for doubt, we can employ and develop our terminology to produce results that accord with a notion of justice.
I hope you like that proposed application of the theory. Interested to hear your thoughts on whether it’s fitting, or if not, why not.
Appreciate the thorough response; there are some good recs here. I haven’t read any of Chrysippus, and my knowledge of the Epicureans is limited to their moral philosophy (alongside that of the Stoics). That said, I can’t help but get the feeling you’re negging me a little with the references to skeptics, continentals, and professorial assistance! Fortunately or unfortunately, I’m less a rationalist than my presence here might imply—Bourdieu’s symbolic capital and ethology’s signaling theory are interchangeable in my book. Also fortunately or unfortunately, I’m not a uni student these days, my institutional education concluded a few years back, so I suppose I’ll have to make headway on any texts solo, without professorial help.
A quick meta-note: I think there’s a problem whereby people who study the history of philosophy have incentives to steelman their subjects’ ideas and thinking, in order to justify their study. I imagine this claim will be received with some pushback, so I’ll try to break it down into less controversial parts, and we can sum them together. First, I think there are strong incentives in academia for everyone to constantly justify their work. Whether it’s prepping for department socials, getting tenure, applying for grants, or just coming to peace internally with a lifetime dedicated to scholarship, it’s hard to avoid this subtle narrative of self-justification. Second, I think when we read ancient texts, we’re in a tricky situation. As Wittgenstein once said of Plato,
Perhaps Plato is no good, perhaps he’s very good. How should I know? But if he is good, he’s doing something which is foreign to us. We do not understand.
Perhaps Witt overstates the case, but I feel like we can agree that texts are incredibly “gappy,” as the literary theorist Wolfgang Iser says. That is, so much of texts’ intended meaning resides in metonymic implication, “what can be left unsaid,” contextual situation, etc.—and the further we get, culturally and temporally, from these texts, the easier it is to project contemporary schemas onto philosophy past. Not to give you homework, but you may be interested in reading the interview I did with philosopher Jonathan Livengood around the same time I wrote the piece under discussion. We talk a bit about N&S conditions and connections between Plato and positivism, but, more relevant to our current discussion, we chatted about secondary sources’ treatment of their subjects. He says:
The danger is more on the side of over-interpreting, or being overly charitable to the target. I just wrapped up a grad seminar on the problem of induction, and we were looking at the historical development of the problem of induction from Hume to 1970. As I pointed out, when you look at Hume, Hume’s great, he’s fun to read, but he’s also deeply confused, and you don’t want to do the following, which is a mistake: If you start with the assumption that Hume was just right, and assume that, if you’re seeing an error it must be an error in your interpretation—if that’s your historiographical approach, you’re not going to understand Hume, you’re going to understand this distorted SuperHume, who knows all these things Hume didn’t know, and can respond to subtle distinctions and complaints that someone living now is able to formulate. That’s not Hume! Hume didn’t have an atomic theory, he didn’t know anything about DNA or evolution; there are tons of things that were not on his radar. He’s not making distinctions we’d want him to make, that a competent philosopher today would make. There’s a real danger writing secondary literature, or generating new interpretations. If you want to publish a book on Hume, you need to say something new, a new angle—what’s new and also responsible to what Hume wrote? It ends up doing new philosophy under the guise of history.
I think it’s hard to litigate this for specific texts, because of their gappiness. We’ll never know, unless/even if we have rich historiographic knowledge, whether we’re being overly charitable or uncharitable. I do think your Aristotle examples are compelling counter-examples to Yudkowsky’s analysis, but looking at some of the other philosophers you mention as being “woke” on concepts… there I’m a little more skeptical. (Kripke I think we should strike off the list, since he’s very explicitly a Wittgensteinian in thought; ditto with many continentals.)
I think it’s worth re-clarifying what I take the historical blind spots of philosophy to have been, and the way I believe a style of inquiry has proven unproductive. I know my original piece is both very long, by online standards, and not especially clear structurally.
Essentially, I think that most philosophical projects which fail to appreciate the Wittgensteinian “words don’t work that way” lesson will end up doing lexicographic work, not philosophy. My claim is that, with a concept like “causality” or “justice” or “beauty” (there are dozens of equally contested terms, historically), there is no “there” there. Rather, there are a multitude of continuous, analogically and historically related phenomena which are “close enough” in various ways that, with some extra specification via contextual use, these handles are pragmatically useful. If one seeks to analyze the natural language concept “causality” or “justice” or “beauty” by finding commonalities between the natural language meanings, one will end up doing primarily historical, cultural, and lexicographic work, because these word-bundles are in no way atomic, in no way essential. In another culture, or another language, there might be twelve types of causality or justice or beauty. They might conflate justice and beauty into a single term. How, then, does it make any sense to treat these, implicitly, as if they were natural kinds; that is, to look (as many 20th C philosophers do) for an explanation of causality that is robust to all native-English usages, but also has some deep underlying quasi-essence which can be singularly studied, analyzed, and understood? Philosophers in the know today will readily admit there are no natural kinds—species were the last example to cling to, and speciation is very messy and socially constructed, as any undergrad biologist knows. There are only continuities, at least at levels higher than particles, because the world is incredibly complex, and the possible arrangements of matter functionally infinite. (I know very little about physics here, so excuse any ignorance.) Our concept of causality, as Livengood talks about in the interview, is tied up in a long cultural history of moral judgments and norms, in folk theories and historically contingent metaphors. It is not a single coherent “thing.” And its bounds do not relate to intrinsic material forces so much as they do to human use. Native speakers will attribute causality in a way that is pragmatic, functional, and social.
In other words, natural language is near-useless, and often counterproductive, in trying to understand natural territories. Until recently, we might remember, plant and animal species were classified by their value to humans—poisonous vs medicinal plants, edible vs nonedible, tame vs wild animals, noble vs base beasts, etc. Imagine, now, a natural philosopher attempting to hash out a concise and robust definition of “noble animals,” separate from a nominalist thread like “they’re all described as noble by humans,” as if there were some property inherent to these organisms, separate from their long cultural and historic understanding by humans. Such a philosopher would find out, perhaps, a bit about human beings, but almost nothing worthwhile about the animals.
This is the situation I see with conceptual analysis. Natural language is a messy, bottom-up taxonomy built around pragmatic functionality, around cultural and social coordination, around human life. Conceptual analysis acts as if there is a “there” there—as if there were some essence of “justice” or “causality” that maps closely to the human concept and yet exists separate from human social and cultural life. I submit there is not.
(These folk might quibble they don’t believe in essences, but as I remark to Jon, my opinion here is that “a classical account of concepts as having necessary and sufficient criteria in the analytic mode is in some way indistinguishable from the belief in forms or essences insofar as, even if you separate the human concept from the thing in the world, if you advance that the human concept has a low-entropy structure which can be described elegantly and robustly, you’re essentially also saying there’s a real structure in the world which goes with it. If you can define X, Y, & Z criteria, you have a pattern, and those analyses assume, if you can describe a concept in a non-messy way, as having regularity, then you’re granting a certain Platonic reality to the concept; the pattern of regularity is a feature of the world.”)
We might consider the meaning of textual “meaning.” It can refer to an author’s intention, or a reader’s interpretation. It can refer to a dictionary definition, or the effect of a cause. All these are present in our language. Literary theorists spent the 20th century arguing over whether meaning just “is” unknowable author intention or diverse reader interpretation or some formal, inherent thing inside a text. (This last position is absurd and untenable, but we’ll set that aside for now.) This “debate” strikes me as a debate not over the world, or the territory, or the nature of reality, but over whether one sense of a term ought to be standard or another. It is fundamentally lexicographic. There are many valuable insights tucked into these incessant theoretical debates, but they suffer from residing inside a fundamentally confused frame. There is no reason for one singular definition of “meaning” to exist; “words don’t work that way.” Many senses have been accumulated, like a snowball, around some initial core. The field ought, in my opinion, to have separated authorially intended meaning from reader-interpreted meaning, called them different terms, and called it a day. I say “ought”—why? On what grounds? Because, while in everyday linguistic use, a polysemous “meaning” might be just fine & functional, within the study of literature, separating intent from interpretation is crucial, and having diverse schools who use the term “meaning” in radically different ways only breeds confusion & unproductive disagreement. It is hard for me to understand why philosophers would ever approach the “causality” bundle as a whole, when it is clearly not in any way a singular concept.
I know many philosophers have attempted to carve up terms more technically, in ways more pragmatically suited to the kinds of inquiries they want to make (Kevin Scharp on truth comes to mind), but many, historically, have not.
Second, any philosopher who takes edge cases seriously in trying to understand natural language does not understand natural language to begin with. Because our words are functional tools carving up a continuous material space, and not one-to-one references to real, discrete objects with essences, they are optimized for real human situations. Much of the fretting over gendered language, or racial language, comes because there is increasing awareness of “edge cases” or “in-betweens” that disrupt our clean binaries. Similarly, Pluto’s ambiguous planet/non-planet status arises because it, like other bodies in our solar system, sits awkwardly between cultural categories. There is no such “thing” as a planet. There are various clusters of atoms floating around, of many different sizes and materials, and we’ve drawn arbitrary lines for functional and pragmatic reasons. The best piece I can recommend on this is David Chapman’s “ontological remodeling” (I quibble with his use of “ontological,” but it’s no matter—it shows how cultural and historical, rather than inherent or natural, the concept of “planet” is).
I’ll quote the philosopher Marcus Arvan here in the hope of clarifying my own often messy thought:
I increasingly think — and so do Millikan, Baz, and Balaguer — that [the analytic] approach to philosophy is doubly wrong. First, it is based on a misunderstanding of language. I think Wittgenstein (and Millikan) were both right to suggest that our words (and concepts) have no determinate meaning. Rather, we use words and concepts in fundamentally, irreducibly messy ways — ways that fluctuate from moment to moment, and from speaker/thinker to speaker/thinker. A simpler way to put this is that our concepts — of “free will”, “justice” etc. — are all, in a certain way, defective. There is no determinate meaning to the terms “free will”, etc., and thus philosophical investigation into what “free will” is will be likely to lead, well, almost everywhere. At times, we use “free will” to refer (vaguely) to “reason-responsiveness”, or to “actual choices”, or whatever — but there is no fact of the matter which of these is really free will. Similarly, as Balaguer points out in another paper, there is no fact of the matter whether Millianism, or Fregeanism, or whatever about the meaning of proper names is right. All of these positions are right — which is just to say none of them are uniquely right. We can, and do, use proper names in a myriad of ways. The idea that there is some fact of the matter about what “free will” picks out, or what names mean, etc., all fundamentally misunderstand natural language.
And there is an even deeper problem: all of it is hollow semantics anyway. Allow me to explain. In his paper on compatibilism and conceptual analysis, Balaguer gives the following example. Two psychologists, or linguists, or whatever are trying to figure out what a “planet” is. They then debate to no end whether Pluto is a planet. They engage in philosophical arguments, thought-experiments, etc. They debate the philosophical implications of both sides of the debate (what follows if Pluto is a planet? What follows if it is not?). Here, Balaguer says, is something obvious: they are not doing astronomy. Indeed, they are not really doing anything other than semantics. And notice: there may not be a fact of the matter of what “planet” refers to, and it does not even matter. What matters is not what the concept refers to (what is a planet?), but rather the stuff in the world beyond the concepts (i.e. how does that thing — Pluto — behave? what is its composition? etc.).
I understand that this critique is focused on 20th C analytic, and that your comment above is focused more on the ancients. But it seems like big picture, what we’re trying to figure out is, “How well-known are these problems? How widespread are philosophical practices which fall into linguistic pitfalls unwittingly?”
Showing my hand, in the nominalist/conceptualist/realist frame, it seems to me that any frame but nominalism is scientifically untenable. Various cog-sci and psych experiments have, in my opinion, disproven conceptualism, whereas the collapse of natural kinds bars realism, at least for those empiricists unwilling to believe in the supersensory realm. I do want to explore nominalism more, and probably should have included at least a paragraph on it in this piece. Many regrets! I believe I felt under-educated on the topic at the time of writing, but this is a good reminder to read up. From the secondary sources I’ve come across, it seems like nominalism is the closest analogue to the emerging modern view of language, universals, natural kinds, abstract entities, etc.
(Sidenote: isn’t Aristotle a realist like Plato? Or at least, in the medieval era his legacy became such? I usually see him pitted against nominalism, as one of the orthodoxies nominalism challenged.)
My big-picture understanding of the philosophical history is that a Platonic realism/formalism outcompeted more nominalist or pragmatic contemporaneous views like those of Protagoras (or perhaps the Epicureans!). The diversity of Greek thought seems incontestable, but the “winners” less so. (It’s not for nothing they say all philosophy is footnotes to Plato.) Realist views go on to dominate Western philosophy up until the medieval era, bolstered by the natural incentives of Christian theology. Nominalism emerges, and claims a non-trivial number of philosophers, but never fully replaces more realist, analytic, or rationalist viewpoints. (I include rationalism because the ideas of the a priori and the analytic are both, IMO, fatally undermined by nominalism + the messiness of natural language.) American pragmatism strikes hard against the Hegelian rationalisms of its day, but regrettably makes little long-term impact on analytic. Similarly, Wittgenstein’s warnings are largely ignored by the analytic community, which continues on with conceptual analysis into the present day, as if nothing were the matter with their methods and puzzle-like riddles. (The continentals, for all their problems, did take Wittgenstein’s critique seriously. Foucault’s Archaeology of Knowledge, or Lyotard’s examination of language games, or Bourdieu’s dismissal of essentialism, each come to mind.) I am curious if you’d contest this.
I am still trying to understand why the linguistic critiques of such riddles and paradoxes, by a philosopher as well-known and widely read as Wittgenstein, have not more widely impacted the academic philosophy community. It seems you’re on my side on this one, the issues with contemporary academic philosophy, so allow me to quote some speculation you might find interesting. The first cause is likely self-selection out: whereof one cannot speak, thereof one must be silent. And so it goes with graduate students pilled on later Witt. Second are problems of selection proper: knowledge regimes, and their practitioners who have invested lifetimes in them, do not cede their own follies lightly. Meanwhile, they continue to select students who confirm, rather than challenge, their own intellectual legacies—both unconsciously, because of course they believe their intellectual legacies are more correct or important, and consciously:
A friend who was considering applying to graduate school in philosophy once told me that a professor described what the graduate programs are looking for as follows: they want someone who will be able to “push the ball forward.” The professors want to know that their graduate students will engage with the professors’ problems in a productive way, participating in the same problem-solving methods that the professors use — for example, clarifying puzzles by drawing creative new distinctions involving obscure and highly technical philosophical concepts.
Needless to say, if this is the requirement for becoming a professional philosopher, then quite a few kinds of philosophers need not apply. Such as philosophers who ask questions and resist asserting answers, or philosophers who view the adoption of dogmatic philosophical positions as arbitrary and pointless. Oddly enough, any philosopher with the perspicuity to understand the futility of the puzzle-playing philosophers’ methods will probably struggle to be heard and understood in an American philosophy department today, much less employed. In effect, a kind of blindered credulousness is now a prerequisite for entering and rising in a field that is ostensibly defined by its commitment to unrelenting critical inquiry. (src)
Still, when I learned that philosophers today still take seriously one another’s intuitions (including intuitions about bizarre, other-worldly counterfactuals) as sources of knowledge about reality, I realized that inexplicable amounts of folly can persist in disciplines. Alas.
Regarding law, that is indeed a good example of counterfactuals shaping language, though I’m not sure how much legal definitions filter into mainstream usage. Either way, legal language really is such a rich area of discussion. Textualist views, which I would previously have dismissed as naive—”there’s no inherent or objective meaning in the words, man! Meanings drift over time!”—have some compelling pragmatic arguments behind them. For one, a Constitutional provision or Congressional law is not the product of a single designer, with a singular spirit of intent, but rather the result of a dynamic process within a committee of rivals. A bill must pass both chambers of Congress and then the Executive chair; at each stage, there will be voters or drafters with very different intentions or interpretations of the wording of the law being passed. Textualism, in this frame, is a pragmatic avoidance of this chaotic, distributed intentionality in favor of the one common source of truth: the actual letter of the law as written and passed. How can we meaningfully speculate, in such a system, about what Congress “intended,” when the reality is a kludge of meanings and interpretations loosely coordinated by the text-at-hand? A second case for textualism is that it prevents bad incentives. If a lawmaker or coalition of lawmakers can create a public impression of the intent, or spirit, of a law, which exists separate from the actual impressions of the voting and drafting representatives, and this intent or spirit is used in court cases, an incentive is created for strategic representation of bills in order to sway future court cases. Third, a textualist might appeal to public transparency of meaning, in the vein of the Stele of Hammurabi. A population must be able to transparently know the rules of the game they are playing. Oliver Wendell Holmes: “We ask, not what this man meant, but what those words would mean in the mouth of a normal speaker of English, using them in the circumstances in which they were used … We do not inquire what the legislature meant; we ask only what the statutes mean.” How the words are understood is, from this perspective, more important than the intent—since individuals will act according to the law as understood (and not as intended).
These are the steelmen of textualism—look what happens, however, when it’s applied naively:
“Well, what if anything can we judges do about this mess?” Judge Richard Posner asked that question midway through his opinion in United States v Marshall.
[...]
The issue in Marshall was whether blotter paper impregnated with the illegal drug LSD counts as a “mixture or substance containing” LSD. The question matters because the weight of the “mixture or substance” generally determines the offender’s sentence. A dose of LSD weighs almost nothing compared to blotter paper or anything else that might be used in a similar way (such as gelatin or sugar cubes). If the weight of the medium counts, a person who sold an enormous amount of pure LSD might receive a much lighter sentence than a person who sold a single dose contained in a medium. Also, the per-dose sentences for sales of LSD would bear an arbitrary relationship to the per-dose sentences for sales of other drugs, because the LSD sentences would be, for all practical purposes, a function of the weight of the medium.
[...]
The majority ruling held that blotters were “a mixture or substance containing” LSD, and therefore part of its weight. “Judge Posner’s dissent argued that the ‘mixture or substance’ language should be interpreted not to include the medium, because the majority’s conclusion led to irrational results—indeed results so irrational that they would be unconstitutional if the statute were not construed differently.”
[...]
Treating the blotter paper as a “mixture or substance containing” LSD produces results that are, according to Judge Posner and Justice Stevens, who dissented in Chapman, “bizarre,” “crazy,” and “loony.” Selling five doses of LSD impregnated in sugar cubes would subject a person to the ten-year mandatory minimum sentence; selling 199,999 doses in pure form would not.
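(To spell out the arithmetic behind that comparison, using figures I believe line up with the case, so treat them as approximate: the statute’s ten-year mandatory minimum kicks in at 10 g of a “mixture or substance containing” LSD, and a pure dose weighs on the order of 0.05 mg. So:

199,999 doses × 0.05 mg/dose ≈ 9.99995 g, just under the 10 g threshold
5 doses on sugar cubes ≈ 5 × ~2 g/cube ≈ 10 g, at or over the threshold

The medium’s weight, not the drug’s, does all the work.)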
How did the court come to this decision?
The Supreme Court used dictionaries to define “mixture,” coming to the conclusion that a blotter fit the definition (“a ‘mixture’ may … consist of two substances blended together so that the particles of one are diffused among the particles of the other”) and that this was sufficient for their ruling. And yet, Strauss writes, this dictionary definition has little to do with normal English use of the word mixture, which would never call a water-soaked piece of paper a “mixture” of paper and water, or a piece of paper soaked in salt water and dried, with the salt crystals remaining, a “mixture” of salt and paper.
A man was sentenced to decades in prison over this. The truth is that Congress almost certainly did not intend to write legislation in which selling five doses of sugar-cube LSD resulted in a higher sentence than 200k pure doses. The situation eerily echoes philosophical discourses I’ve come across. Chalmers, for instance, looking up “engineering” in the dictionary in order to figure out the solution to analytic’s problems, is not nearly as harmful as the Marshall ruling. But it is equally confused. The map is not the territory, as LessWrongers are fond of saying—and justice is not found in the dictionary.
Hmmm, after giving it a day, I feel like I may have unfairly or unproductively bombarded you here, so know I won’t be offended if I don’t get a response.
I’ll try to read some of the recommendations, and perhaps in a while I can come back to this conversation with more of value to contribute.
Thanks for the reply. I’ll try to reply comprehensively, sorry if I miss anything. To start with—Aristotle.
What Aristotle Taught
I’m going to break this into two parts—the part about logic, and the part about concepts. Logic first. Aristotle indeed wrote six works on logic and reasoning, which are most often collectively called the Organon. Most of it is developing a valid system of syllogistic logic. The really nice part about syllogistic logic is that correct syllogisms are indisputably valid (but not indisputably sound). Aristotle is totally clear about this. He showed—correctly—that logic, correctly applied, makes your conclusions as true as your premises (i.e. logic is valid); but that alone still doesn’t entitle you to certainty about your conclusions, as you can’t trust your premises any more than you could from the start (i.e., validity is not soundness).
In The Parable of Hemlock, ctrl+F “the Greeks.” Eliezer’s issue isn’t with syllogism. It’s with something different: the assertion that “all men are mortal” by definition. Aristotle says nothing of the sort, least of all in the Organon; he just uses the statement as a hypothetical premise to demonstrate the form of valid syllogism, the same way you might use a sample like “all frogs are green, Harold is a frog, Harold is green” as a lesson of validity in a logic class, regardless of whether purple dart frogs exist. The text that most clearly shows this is the Topics, where Aristotle characterizes good arguments as constructed by using syllogism (as characterized in the earlier works of the Organon) or enthymematic syllogism, especially when the syllogism begins from established beliefs (endoxa) as premises. Explicitly, these endoxa like “all men are mortal” are not certain or guaranteed to be true; but they are better than wild speculation, especially if you are trying to persuade someone. So Eliezer’s attack on the Greeks is off base, mistaking the assertion of validity for the assertion of soundness.
There’s nothing wrong with syllogistic logic, as long as you don’t make too much of it. Eliezer’s top-line conclusion is that “logic never dictates any empirical question [with certainty]”; I think you would be extremely hard-pressed to find a sentence in Aristotle which disagrees, and Eliezer’s clear imputation that they did disagree is ignorant and uncharitable. Logic is a useful tool for reasoning from premises you are reasonably confident in, to conclusions you can be similarly confident in.
It’s no straw man to say that Aristotle liked logic. The straw-manning comes when Eliezer asserts that “the Greeks” thought you could derive certain empirical truths from logic alone. (Parmenides, Spinoza, and Kant attempted this, but not Plato, Aristotle, or most philosophers.) Rather, Aristotle’s logic is all about taking established pretty-good beliefs (which are not called certain, but are generally acknowledged and are the best we have to work with) and having a sure way to arrive at exactly equally good beliefs. Putting this in writing was an incredibly valuable contribution to philosophy.
Now for the part about concepts. Did Aristotle treat concepts similarly to the logical positivists? Honestly, I think not; my impression is that the average positivist was a nominalist about the question of universals, while the best summary of Aristotle’s view on the topic probably heavily uses the word hylomorphism. It’s kinda his own deal, like how Plato was Platonist. I don’t love Aristotle’s metaphysics, and I think there are powerful skeptical/nominalist critiques of hylomorphism, which is after all a formalist view of one kind or another. But I don’t think Eliezer really advanced them, or understood Aristotle’s (or any Greek’s) phenomenology of concepts at all. For a little taste of how nuanced Aristotle’s thoughts on words and concepts actually were, here’s another bit from the last book of the Organon:
Relevant Reading (By philosopher)
If we’re discussing the problem of “gee whiz, in what sense do concepts exist and truthfully inhere in an ever-changing world?” Virtually all of them! Here’s a short rogue’s gallery, take your pick if you’re intrigued by one in particular.
Plato: Plato’s answer is formalism. But even (or especially) if you think that’s absurd, his treatment of the question is incredibly valuable. Plato is deeply aware and deeply disturbed by the fact that the world around him is changeable, that appearances and naively-constructed concepts deceive, and that nothing certain can be found in them. And the core of many of his dialogues are devoted to proving exactly that. Take the Theaetetus, where he talks about certain knowledge. Can we get it by sense perception? Not quite, appearances can deceive. What about judgment? Fallibility would indicate no. Is it justified true belief? Perhaps, but “justification” demands prior knowledge of the thing itself, so this is invalid by circularity! Plato strongly hints at his solution of formalism, but to pave the way to it, he demolishes more standard accounts first by trying to prove the slipperiness of ordinary concepts and the inaccessibility of certainty. Skeptical accounts can find a great deal to like. (Ever wonder why J.L. Mackie’s skeptical “argument from queerness” begins as a steadfast defense of Platonism as the only way to objective morality? For generations, skeptics have made hay by starting with Plato’s objections to others, then attacking Plato’s rehabilitative view as the final step of a deflationary account.) Parmenides is also recommended reading, as most of it is criticism of the theory of forms. But it’s not for the faint of heart, you’ll need some really good secondary lit—or far better, a supportive professor to read through it with. Trying to read and understand it by yourself is an aneurysm risk.
Aristotle: Often denser than Plato. But he’s far more methodical and much easier to interpret, since he’s not writing dialogues with Straussian readings or citing myths which he didn’t believe or any of that artistic jazz. The Nicomachean Ethics may be a good place to see him apply his method of discourse about the natural world, but the writings of his that are most relevant to this conversation are definitely Physics and Metaphysics. (fun fact: the field was named for the book; “meta” is just Greek for “after”, so “Metaphysics” just means “after physics”, “more physics,” or maybe “physics 2″.)
Stoics: Chrysippus is your boy here. He is taken to be one of the first nominalists (a general term for one of the most popular non-realist views, i.e., that universal properties are words alone and not things in their own right). https://iep.utm.edu/chrysipp/#H5 has a summary you might like, and it may be the best we can do, since virtually all of Chrysippus’ actual writings are not extant (his views were passed to us by way of others’ summaries of them), and most other Stoics (like Epictetus or Aurelius) spent more time talking about ethics, with physics receiving more of a passing mention.
Epicureans: Really just Epicurus, as his teachings were passed down by Lucretius in De Rerum Natura. Virtually nothing else from this school is extant, but their influence is very significant. Steadfast materialists, atomists, atheists, and hedonists. This community would like their teachings a lot. I’ll take this opportunity to point out a trend which is commonplace throughout ancient philosophy; Epicureanism is atheist, but the text sings paeans to gods, using them as stand-ins for abstract concepts. This is weird, but not at all rare in ancient philosophy. Anytime you see someone invoke a god or a myth, before dismissing it as superstition, see if it’s useful to treat it as metaphor or conjecture instead. Remember that, for all the talk of gods and myths he engaged in, one of Socrates’ two crimes that he was killed for was impiety.
Skeptics: You will agree with these people less than their names imply you will. They thought some weird stuff; Academic or Pyrrhonian, either way it sometimes comes off as worshiping ignorance. In any case, formalists they were not, and their eponymous attitude comes across in their writings, which are very clear that if there are in fact universals, we are either unable to come to know them, or even morally forbidden to try.
Peripatetics, Cynics, Cyrenaics and more: there are so many ancient Greeks. Many of them may not have written anything of value on this question, I can’t say. This is the part where I confess ignorance of and wonder at the true diversity of Ancient Greek thought.
Another big gap in my knowledge is Christian and medieval thought, but I had enough friends who studied it to understand that my received caricatures of it were misplaced. Aquinas apparently contributed things to metaphysics in the vein of Aristotle. Maybe Augustine has dope metaphysics, no idea. God features prominently, of course, so know thyself and whether that’s a turn-off.
Early Moderns: Spinozism is super weird and monist and stuff. Maybe not that. Kantianism is incomprehensible, even in the original German, but if you can find a good professor to walk you through it (preferably in a classroom environment), there is a reason he was so influential. The obvious suggestion is the Critique of Pure Reason, and it is definitely the one that is relevant here. (It’s where the separation of syntheticity from prioricity comes from! I don’t think it’s a good separation, but you will need to understand what it means if you want to understand many metaphysicians after him, most of all Kripke.) I personally like The Critique of Judgment too.
Continentals: Another gap in my knowledge. A friend read a lot of them and said “there’s no there there”, but I would guess that had as much to do with that friend as the writing itself. Another said Hegel is apparently very fun “in the right state of mind” (I think they meant psychedelics. This is not an endorsement of illegal drug use.) As with other categories on this list, I will acknowledge my ignorance of whatever brilliance might be here. For what it’s worth, if you are interested in critiquing the “classical” method of counterfactual reasoning—or reasoning in general—you may find allies here, even if they are strange bedfellows.
Moderns: Jumping right up to the 1900s. Meinong gets a bad rep but I still like him (do square circles exist? Maybe as much as anything else does!) Russell and Wittgenstein, you cited already. Tarski is also a great one, who created a modal logic (“T-schemas” is a search term you can start with) which is intended to be generalizable over different uses of language. Almost certainly has connections to anything philosophy of language-related. I like Carnap a whole lot, and he did a lot of philosophy of science which you may find relevant. I dislike Kripke a lot, but there’s no question that his thought is an intensely relevant to any philosophy which deals directly with the idea of meaning (he doesn’t think it’s a thing, or at least, wants a deflated version of it to be the norm). He took himself to be in the tradition of Wittgenstein.
Counterfactual Reasoning
I really like, and generally agree with, your summary of how edge cases and obtuse counterexamples have pushed people to somewhat absurd conclusions. I’ll provide some pushback, but first let me indulge myself in agreeing, and providing an example. My undergraduate senior paper employed an unfortunately complex variant of the trolley problem (guess how many tracks were involved?) to contest an arcane ethical principle relevant to a facially absurd variant of utilitarianism. It was truly approaching self-parody, and I was well aware, I just wasn’t sure what other topic I had an idea about which would fill enough pages. (funnily enough, I can write more than enough pages on random internet fora, though.)
For all that ethics should be able to provide us with answers, and there should be answers even for corner cases… it is extremely clear to me that academic ethics has gone over the deep end. Ethical views are now defined based on cases which are often so ridiculous that whatever decision one would make in those situations is probably a noncentral example of ethical or unethical behavior. It’s clear enough to me how we got here, given a certain kind of steadfast realism about ethics, and it’s unclear what exact countervailing view I think should prevail… but somewhere, somehow, we have gone wrong.
Is the source of the problem counterfactual reasoning itself? Perhaps a certain too-strong form of it. But I also think that a mature version of “conceptual engineering” would see a lot of it employed.
The example, or family of examples, that I want to give you and propose as an incredibly useful analogy here, definitely one where there are lots of examples of “concept drift and sense accumulation”, is law. It’s not exactly common usage, but legal language has a bunch of desirable features as an analogy here to apply “conceptual engineering” to. The boundaries of initially-vague concepts like “probable cause” or “slander” are often decided based on past definitions and laid-out sets of necessary and sufficient conditions in case law. But they are also subject to shift when corner cases are encountered which clearly do or don’t fall into the category—previous understandings of the necessary and sufficient conditions be damned. Ultimately, the courts converge on definitions that are useful at the very least, and they use a number of methods to do it, counterfactual reasoning and N&S conditions being some of the tools in the toolbox. Do you think law should dispose of those tools, and do you think it would lead to better decisions if they did? My answer is “no”; I think they’re great pragmatic tools in conjunction with other tools; and that makes me think that N&S conditions and counterfactual reasoning aren’t the real problem here. They can be useful ways to engineer concepts, rather than just a destructive way to attack them with corner cases.
Legal language is also nice because it gives us a clear sense of an evaluative objective, a way to “grade” our engineering project—in a word, we might say “justice.” (Meanwhile, to engineer common language, we might grade based on “clarity” or “intersubjectivity”.) When the existing body of rules and conditions still leave room for doubt, we can employ and develop our terminology to produce results that accord with a notion of justice.
I hope you like that proposed application of the theory. Interested to hear your thoughts on whether it’s fitting, or if not, why not.
Appreciate the thorough response; there are some good recs here. I haven’t read any of Chrysippus, and my knowledge of the Epicureans is limited to their moral philosophy (alongside that of the Stoics). That said, I can’t help but get the feeling you’re negging me a little with the references to skeptics, continentals, and professorial assistance! Fortunately or unfortunately, I’m less a rationalist than my presence here might imply—Bourdieu’s symbolic capital and ethology’s signaling theory are interchangeable in my book. Also fortunately or unfortunately, I’m not a uni student these days, my institutional education concluded a few years back, so I suppose I’ll have to make headway on any texts solo, without professorial help.
A quick meta-note: I think there’s a problem whereby people who study historic philosophy have incentives to steelman their subjects’ ideas and thinking, in order to justify their study. I imagine this claim will be received with some pushback, so I’ll try to break it down to less controversial parts, and we can sum them together. First, I think there are strong incentives in academia for everyone to constantly justify their work. Whether it’s prepping for department socials, getting tenure, applying for grants, or just coming to peace internally with a lifetime dedicated to scholarship, it’s hard to help this subtle narrative of self-justification. Second, I think when we read ancient texts, we’re in a tricky situation. As Wittgenstein once said of Plato,
Perhaps Witt overstates the case, but I feel like we can agree that texts are incredibly “gappy,” as the literary theorist Wolfgang Iser says. That is, so much of a text’s intended meaning resides in metonymic implication, “what can be left unsaid,” contextual situation, etc.—and the further we get, culturally and temporally, from these texts, the easier it is to project contemporary schemas onto philosophy past. Not to give you homework, but you may be interested in reading the interview I did with the philosopher Jonathan Livengood around the same time I wrote the piece under discussion. We talk a bit about N&S conditions and connections between Plato and positivism, but, more relevant to our current discussion, we also chatted about secondary sources’ treatment of their subjects. He says:
I think it’s hard to litigate this for specific texts, because of their gappiness. We’ll never know, unless/even if we have rich historiographic knowledge, whether we’re being overly charitable or uncharitable. I do think your Aristotle examples are compelling counter-examples to Yudkowsky’s analysis, but looking at some of the other philosophers you mention as being “woke” on concepts… there I’m a little more skeptical. (Kripke I think we should strike off the list, since he’s very explicitly a Wittgensteinian in thought; ditto with many continentals.)
I think it’s worth re-clarifying what I think the historic blindspots of philosophy have been, and the way I believe a style of inquiry has proven unproductive. I know my original piece is both very long, by online standards, and not especially clear structurally.
Essentially, I think that most philosophical projects which fail to appreciate the Wittgensteinian “words don’t work that way” lesson will end up doing lexicographic work, not philosophy. My claim is that, with a concept like “causality” or “justice” or “beauty” (there are dozens of equally contested terms, historically), there is no “there” there. Rather, there is a multitude of continuous, analogically and historically related phenomena which are “close enough” in various ways that, with some extra specification via contextual use, these handles are pragmatically useful. Anyone who seeks to analyze the natural-language concept “causality” or “justice” or “beauty” by finding commonalities among its natural-language meanings will end up doing primarily historic, cultural, and lexicographic work, because these word-bundles are in no way atomic, in no way essential. In another culture, or another language, there might be twelve types of causality or justice or beauty. They might conflate justice and beauty in a single term.

How, then, does it make any sense to treat these, implicitly, as if they were natural kinds, that is, to look (as many 20th C philosophers do) for an explanation of causality that is robust to all native-English usages, but also has some deep underlying quasi-essence which can be singularly studied, analyzed, and understood? Philosophers in the know today will readily admit there are no natural kinds—species were the last example to cling to, and speciation is very messy and socially constructed, as any undergrad biologist knows. There are only continuities, at least at levels higher than particles, because the world is incredibly complex and the possible arrangements of matter functionally infinite. (I know very little about physics here, so excuse any ignorance.) Our concept of causality, as Livengood discusses in the interview, is tied up in a long cultural history of moral judgments and norms, in folk theories and historically contingent metaphors. It is not a single coherent “thing.” And its bounds do not track intrinsic material forces so much as human use. Native speakers attribute causality in ways that are pragmatic, functional, and social.
In other words, natural language is near-useless, and often counterproductive, in trying to understand natural territories. Until recently, we might remember, plant and animal species were classified by their value to humans—poisonous vs medicinal plants, edible vs nonedible, tame vs wild animals, noble vs base beasts, etc. Imagine, now, a natural philosopher attempting to hash out a concise and robust definition of “noble animals,” separate from a nominalist thread like “they’re all described as noble by humans,” as if there were some property inherent to these organisms, separate from their long cultural and historic understanding by humans. Such a philosopher would find out, perhaps, a bit about human beings, but almost nothing worthwhile about the animals.
This is the situation I see with conceptual analysis. Natural language is a messy, bottom-up taxonomy built around pragmatic functionality, around cultural and social coordination, around human life. Conceptual analysis acts as if there is a “there” there—as if there were some essence of “justice” or “causality” that maps closely to the human concept and yet exists separate from human social and cultural life. I submit there is not.
(These folk might quibble they don’t believe in essences, but as I remark to Jon, my opinion here is that “a classical account of concepts as having necessary and sufficient criteria in the analytic mode is in some way indistinguishable from the belief in forms or essences insofar as, even if you separate the human concept from the thing in the world, if you advance that the human concept has a low-entropy structure which can be described elegantly and robustly, you’re essentially also saying there’s a real structure in the world which goes with it. If you can define X, Y, & Z criteria, you have a pattern, and those analyses assume, if you can describe a concept in a non-messy way, as having regularity, then you’re granting a certain Platonic reality to the concept; the pattern of regularity is a feature of the world.”)
We might consider the meaning of textual “meaning.” It can refer to an author’s intention, or a reader’s interpretation. It can refer to a dictionary definition, or the effect of a cause. All of these are present in our language. Literary theorists spent the 20th century arguing over whether meaning just “is” unknowable author intention, or diverse reader interpretation, or some formal, inherent thing inside a text. (This last position is absurd and untenable, but we’ll set that aside for now.) This “debate” strikes me as a debate not over the world, or the territory, or the nature of reality, but over which sense of a term ought to be standard. It is fundamentally lexicographic. There are many valuable insights tucked into these incessant theoretical debates, but they suffer from residing inside a fundamentally confused frame. There is no reason for one singular definition of “meaning” to exist; “words don’t work that way.” Many senses have accumulated, like a snowball, around some initial core. The field ought, in my opinion, to have separated authorially intended meaning from reader-interpreted meaning, called them different terms, and called it a day. I say “ought”—why? On what grounds? Because, while in everyday linguistic use a polysemous “meaning” might be just fine & functional, within the study of literature, separating intent from interpretation is crucial, and having diverse schools that use the term “meaning” in radically different ways only breeds confusion & unproductive disagreement. It is hard for me to understand why philosophers would ever approach the “causality” bundle as a whole, when it is clearly not in any way a singular concept.
I know many philosophers have attempted to carve up terms more technically, in ways more pragmatically suited to the kinds of inquiries they want to make (Kevin Scharp on truth comes to mind), but many, historically, have not.
Second, any philosopher who takes edge cases seriously in trying to understand natural language does not understand natural language to begin with. Because our words are functional tools carving up a continuous material space, and not one-to-one references to real, discrete objects with essences, they are optimized for real human situations. Much of the fretting over gendered language, or racial language, comes because there is increasing awareness of “edge cases” or “in-betweens” that disrupt our clean binaries. Similarly, Pluto’s ambiguous planet/non-planet status comes about because it, like other bodies in our solar system, sits awkwardly between cultural categories. There is no such “thing” as a planet. There are various clusters of atoms floating around, of many different sizes and materials, and we’ve drawn arbitrary lines for functional and pragmatic reasons. The best piece I can recommend on this is David Chapman’s “ontological remodeling” (I quibble with his use of “ontological,” but it’s no matter—it shows how cultural and historical, rather than inherent or natural, the concept of “planet” is).
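To make the arbitrariness concrete, here’s a toy sketch (my own, not from Chapman’s piece; the numbers are rough, order-of-magnitude values for Soter’s “planetary discriminant,” the ratio of a body’s mass to the mass of everything else sharing its orbital zone):

```python
# Toy illustration (my own): the IAU's "clears its neighborhood"
# criterion amounts to a threshold on a continuous quantity.
# Discriminant values below are rough orders of magnitude only.
DISCRIMINANT = {
    "Earth":   1.7e6,
    "Neptune": 2.4e4,
    "Mars":    5.1e3,
    "Ceres":   0.33,
    "Pluto":   0.077,
}

def is_planet(body: str, cutoff: float = 100.0) -> bool:
    """'Planet' falls out of an arbitrary cutoff, not an essence."""
    return DISCRIMINANT[body] > cutoff

for body in DISCRIMINANT:
    print(body, is_planet(body))

# Any cutoff between ~0.33 and ~5.1e3 reproduces the familiar eight
# planets; slide it below ~0.077 and Pluto is a planet again. The
# boundary is a choice about a continuum, not a discovery.
```

Astronomers redrew exactly this sort of line in 2006; nothing about the rocks changed.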
I’ll quote the philosopher Marcus Arvan here in the hope of clarifying my own often messy thought:
I understand that this critique is focused on 20th C analytic, and that your comment above is focused more on the ancients. But it seems like big picture, what we’re trying to figure out is, “How well-known are these problems? How widespread are philosophical practices which fall into linguistic pitfalls unwittingly?”
Showing my hand: in the nominalist/conceptualist/realist frame, it seems to me that any frame but nominalism is scientifically untenable. Various cog-sci and psych experiments have, in my opinion, disproven conceptualism, while the collapse of natural kinds bars realism, at least for those empiricists unwilling to believe in a supersensory realm. I do want to explore nominalism more, and probably should have included at least a paragraph on it in this piece. Many regrets! I believe I felt under-educated on the topic at the time of writing, but this is a good reminder to read up. From the secondary sources I’ve come across, nominalism seems like the closest analogue to the emerging modern view of language, universals, natural kinds, abstract entities, etc.
(Sidenote: isn’t Aristotle a realist like Plato? Or at least, didn’t his legacy become one in the medieval era? I usually see him pitted against nominalism, as one of the orthodoxies nominalism challenged.)
My big-picture understanding of the philosophical history is that a Platonic realism/formalism outcompeted more nominalist or pragmatic contemporaneous views like those of Protagoras (or perhaps the Epicureans!). The diversity of Greek thought seems incontestable, but the “winners” less so. (It’s not for nothing they say all philosophy is footnotes to Plato.) Realist views go on to dominate Western philosophy up until the medieval era, bolstered by the natural incentives of Christian theology. Nominalism emerges, and claims a non-trivial number of philosophers, but never fully replaces more realist, analytic, or rationalist viewpoints. (I include rationalism because the ideas of the a priori and the analytic are both, IMO, fatally undermined by nominalism plus the messiness of natural language.) American pragmatism strikes hard against the Hegelian rationalisms of its day, but regrettably makes little long-term impact on analytic. Similarly, Wittgenstein’s warnings are largely ignored by the analytic community, which continues on with conceptual analysis into the present day, as if nothing were the matter with its methods and puzzle-like riddles. (The continentals, for all their problems, did take Wittgenstein’s critique seriously. Foucault’s Archaeology of Knowledge, Lyotard’s examination of language games, and Bourdieu’s dismissal of essentialism each come to mind.) I am curious if you’d contest this.
I am still trying to understand why the linguistic critiques of such riddles and paradoxes, by a philosopher as well-known and widely read as Wittgenstein, have not more widely impacted the academic philosophy community. It seems you’re on my side on this one (the issues with contemporary academic philosophy), so allow me to offer some speculation you might find interesting. The first cause is likely self-selection out: whereof one cannot speak, thereof one must be silent. And so it goes with graduate students pilled on later Witt. Second are problems of selection proper: knowledge regimes, and their practitioners who have invested lifetimes in them, do not cede their own follies lightly. Meanwhile, they continue to select students who confirm, rather than challenge, their own intellectual legacies—both unconsciously, because of course they believe their intellectual legacies are more correct or important, and consciously:
Still, when I learned that philosophers today still take seriously one another’s intuitions (and about bizarre, other-worldly counterfactuals) as sources of knowledge about reality, I realized that inexplicable amounts of folly can persist in disciplines. Alas.
Regarding law, that is indeed a good example of counterfactuals shaping language, though I’m not sure how much legal definitions filter into mainstream usage. Either way, legal language really is such a rich area of discussion. Textualist views, which I would previously have dismissed as naive—”there’s no inherent or objective meaning in the words, man! Meanings drift over time!”—have some compelling pragmatic arguments behind them. For one, a Constitutional provision or Congressional law is not the product of a single designer, with a singular spirit of intent, but rather the result of a dynamic process within a committee of rivals. A bill must pass both chambers of Congress and then the Executive chair; at each stage, there will be voters or drafters with very different intentionalities or interpretations of the wording of the law being passed. Textualism, in this frame, is a pragmatic avoidance of this chaotic, distributed intentionality in favor of the one common source of truth: the actual letter of the law as written and passed. How can we meaningfully speculate, in such a system, about what Congress “intended,” when the reality is a kludge of meanings and interpretations loosely coordinated by the text-at-hand? A second case for textualism is that it prevents bad incentives. If a lawmaker or coalition of lawmakers can create a public impression of the intent, or spirit, of a law, which exists separate from the actual impressions of the voting and drafting representatives, and this intent or spirit is used in court cases, an incentive is created to strategically represent bills in order to sway future court cases. Third, a textualist might appeal to public transparency of meaning, in the vein of the Stele of Hammurabi: a population must be able to transparently know the rules of the game it is playing. Oliver Wendell Holmes: “We ask, not what this man meant, but what those words would mean in the mouth of a normal speaker of English, using them in the circumstances in which they were used … We do not inquire what the legislature meant; we ask only what the statutes mean.” How the words are understood is, from this perspective, more important than what was intended—since individuals will act according to the law as understood (and not as intended).
These are the steelmen of textualism—look what happens, however, when it’s applied naively:
How did the court come to this decision?
A man was sentenced to decades in prison over this. The truth is that Congress almost certainly did not intend to write legislation in which selling five doses of sugar-cube LSD resulted in a higher sentence than selling 200k pure doses. The situation eerily echoes philosophical discourses I’ve come across. Chalmers, for instance, looking up “engineering” in the dictionary in order to figure out the solution to analytic’s problems is not nearly as harmful as the Marshall ruling. But it is equally confused. The map is not the territory, as LessWrongers are fond of saying—and justice is not found in the dictionary.
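For what it’s worth, here’s the back-of-envelope arithmetic that plausibly underlies that 200k figure (the per-dose and per-cube weights are my assumptions, not figures from the case): under the “mixture or substance” reading, the carrier medium counts toward the statutory drug weight.

```python
# Back-of-envelope sketch (assumed weights are mine, not the court's):
# under the "mixture or substance" reading, the carrier medium counts
# toward the statutory drug weight.
PURE_DOSE_MG = 0.05      # one dose of LSD: roughly 50 micrograms
SUGAR_CUBE_MG = 2_000.0  # one sugar cube: roughly 2 grams

# Statutory "weight" of five doses sold on sugar cubes:
five_cubes_mg = 5 * (SUGAR_CUBE_MG + PURE_DOSE_MG)

# How many pure doses would carry the same statutory weight:
equivalent_pure_doses = five_cubes_mg / PURE_DOSE_MG

print(f"{five_cubes_mg / 1000:.1f} g of 'LSD'")      # ~10.0 g
print(f"~{equivalent_pure_doses:,.0f} pure doses")   # ~200,000
```

Each cube contributes some forty thousand doses’ worth of “weight” that is just sugar, which is how the sentencing scale inverts.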
Apologies for the wall of text.
“Puzzle-playing” reminds me of Kuhn’s The Structure of Scientific Revolutions:
https://samzdat.com/2018/05/19/science-under-high-modernism/
So, that’s just academia for you, except it might be worse in the Philosophy department, for all the reasons that you outline?
Hmmm, after giving it a day, I feel like I may have unfairly or unproductively bombarded you here, so know I won’t be offended if I don’t get a response.
I’ll try to read some of the recommendations, and perhaps in a while I can come back to this conversation with more of value to contribute.