Thanks, I picked the name myself. This is a new account because I haven’t commented before, but I’m long familiar with this community and its thought—and its norms. Given those norms, I probably should have cooled off a bit before posting that comment. Let me try again. I apologize in advance for the length of the below, but charity takes more work and therefore more words.
Fairness to the Ancients
I think we’re talking past one another. Plato was definitely a Platonist, and he definitely employed counterfactual reasoning. Congratulations to your Ancient Phil professor on achieving tenure; I studied under others (I won’t say who or where for privacy reasons), and they likewise taught that Plato believed in essences. I was not trying to imply that I think otherwise. I simply don’t think that the thing Eliezer attacked in “A human’s guide to words” was, in fact, Platonism; I think it was a straw man. And I took you to also be putting up that straw man, and associating it with all philosophers before Wittgenstein.
You did not cite Aristotle; I brought him in because you cited “a human’s guide to words” as a paradigmatic example of a good argument against “Platonic essences.” And yet, that sequence is not really arguing against Platonic essences, it’s arguing against misapplying Aristotelean syllogism. Eliezer attacks the idea that the logical validity of “Socrates is a man, all men are mortal, therefore Socrates is mortal” entitles you to conclude things with certainty in the real world. Eliezer attributes that view to “the Greek philosophers”, calling them “fond of certainty.” He ridicules this view often throughout the sequence. I think the passage I quoted in my original comment shows this to be a straw man of (among others) Aristotle. Aristotle acknowledges that when your premises are uncertain, your conclusions will be too; and that seeking certainty about uncertain or ill-defined concepts is a fool’s errand. For that matter, I would say every Greek philosopher I am aware of would have acknowledged this, and many wrote about the problem!
The other citation that seemed unduly dismissive of the ancients was your citation to Bishop as saying that philosophers “aprioristically” reasoned from their armchairs prior to the 1900s. For the life of me, I can’t find that in Bishop 1992 (ctrl+F “aprior” and “armchair”, 0 results); if you can cite more specifically, I would appreciate it. I would almost certainly have qualms with any assertion of his saying “[X idea] wasn’t considered before [Y date]”, if he did in fact say anything along those lines.
I definitely agree that Plato was a Platonist; I’m not going against philosophical consensus on that front. What I took you to be doing was taking the label “platonism”, attaching it to Eliezer’s straw man, and then saying that philosophers prior to 1900 all believed it and therefore have nothing to contribute.
I took you to agree with Eliezer because you cited him, and I really strongly dislike his mischaracterization of Aristotle, and even further dislike the fact that he takes that view and attributes it to “the Greeks”, whom he slurs together. I took you to be reproducing that straw man, attaching the name “platonism” to it, and generalizing that view to an even wider range of philosophers who endorsed nothing like it. I still think the article as written can create that impression, but it sounds like that wasn’t your intent, and I’m sorry for jumping the gun into what amounted to an attack on your intelligence.
I’ll stand by my assertion that “a human’s guide to words” straw-mans the ancients. Again, virtually none of the Greeks agreed with the view he attributed to them, and for that matter, attributing just about anything to “the Greeks” is bound to be wrong, given the vast differences between the diverse thinkers in the ancient Hellenistic world. I took my irritation at Eliezer’s ignorance about the ancients, unfairly assumed you agreed in full with his assessments and characterizations because of your citation of that sequence, and extended that irritation towards you, thinking to myself, “as a philosopher, this person should know better!”
Points for Further Discussion
Finally, I want to thank you for taking the time to write a response to an ill-tempered crank; I hope I’ve acquitted myself honorably enough in this follow-up to receive another. If you’d like to steer the conversation toward a more productive discussion of conceptual engineering itself, rather than disputing the ancients, I’d be interested to hear your thoughts on the following propositions (which are, of course, derived directly from ancient thinking):
Counterfactual reasoning (/”Conceptual Analysis”) is the primary tool which has been used to demonstrate the vagueness of concepts, so disposing of it is dangerous to any project which is premised on the vagueness of concepts. It is one extremely useful tool (among others) for engineering and streamlining useful conceptual frameworks which align well with language.
A good account of concepts should include how concepts change. For better or for worse, concepts change when people argue about them—often counterfactually. This means that a project which sets out to understand concepts, but neglects to include counterfactual reasoning as an element of the project, may run into some very hard times very fast. “Conceptual engineering,” as laid out in the article above, is not (yet?) equipped with the necessary tools for this.
Thanks for the thorough reply! This makes me want to read Aristotle. Is the preface to the Nicomachean Ethics the best place to start? I’ll confess my own response here is longer than ideal—apologies!
Protagoras seems like an example of a Greek philosopher arguing against essences or forms as defined in some “supersensory” realm, and for a more modern understanding of concepts as largely carved up by human need and perception. (Folks will often argue, here, that species are more or less a natural category, but species are—first—far more messily constructed than most people think, even in modern taxonomy; second, pre-modern, plants were typically classed first and foremost by their effects on humans—medicine, food, drug, poison.) Still, it’s hard to tell from surviving fragments, and his crew did get run out of town...
I say:
> For a while, arguably until Wittgenstein, philosophy had what is now called a “classical account” of concepts as consisting of “sufficient and necessary” conditions. In the tradition of Socratic dialogues, philosophers “aprioristically” reasoned from their proverbial armchairs
Do you think it would be more fair to write “philosophy [was dominated by] what is now called a classical account”? I’d be interested to learn why the sufficient & necessary paradigm came to be called a classical account, which seems to imply broader scope than Plato alone, but perhaps the label reflects a lack of charity toward the ancients? (My impression is that the majority of modern analytic philosophy is still, more or less, chugging ahead with conceptual analysis, which, even if its practitioners would disavow sufficient and necessary conditions, seems more or less premised on such a view—take a Wittgensteinian, family-resemblance view, and the end goal of a robust and concise definition becomes impossible. Perhaps some analytic philosophers still find value in the process, despite being more self-aware about the impossibility of some finally satisfying factoring of a messy human concept like “causality” or “art”?) One other regret is that this piece gives off the impression of a before/after specific to philosophy, whereas the search for a satisfying, singular definition of a term has plagued many fields, and continues to do so.
Like I said, I haven’t read Aristotle, but Eliezer’s claim seems at most half-wrong from a cursory read of Wikipedia and SEP on “term logic.” Perhaps I’m missing key complications from the original text, but was Aristotle not an originator of a school of syllogistic logic that treated concepts somewhat similarly to the logical positivists—as being logically manipulable, as if they formed a formal taxonomy, with necessary and sufficient conditions, on which deduction could be predicated? I’ve always read those passages in HGtW as arguing against naive definition/category-based deduction, and for Bayesian inference or abduction. I also must admit to reading quite a bit of argument-by-definition among Byzantine Christian philosophers.
Frustratingly, I cannot find “aprioristically” or “armchair” in Bishop either. It is possible the PDF is poorly indexed, but more likely that line cites the wrong text, and the armchair frame is brought up in the Ramsey paper or similar; I’ll have to dive into my research notes from last spring. Bishop does open:
> Counterexample philosophy is a distinctive pattern of argumentation philosophers since Plato have employed when attempting to hone their conceptual tools… A classical account of a concept offers singly necessary and jointly sufficient conditions for the application of a term expressing that concept. Probably the best known of these is the traditional account of knowledge, “X is knowledge iff X is a justified true belief.” The list of philosophers who have advanced classical accounts… would not only include many of the greatest figures in the history of philosophy, but also highly regarded contemporary philosophers.
This is not, however, the same as saying that it was the only mode across history, or before Wittgenstein—ceded.
Glad to step away from the ancients and into conceptual engineering, but I’d love to get your take on these two areas—Aristotle’s term logic, and whether there are specific pre-moderns you think identify and discuss this problem. In your original post, you mention Kripke, Kant, Epictetus. Are there specific texts or passages I can look for? Would love to fill out my picture of this discourse pre-Wittgenstein.
On the conceptual analysis/engineering points:
1. I have wondered about this too, if not necessarily in my post here then in posts elsewhere. My line of thought being: “While the ostensible end-goal of this practice, at least in the mind of many 20th C practitioners—that is, discovering a concise definition which nonetheless robustly describes all possible instances to which a native speaker would ascribe the concept—is impossible (especially when our discourse allows bizarre thought experiments a la Putnam’s Twin Earth…), performing the moves of conceptual analysis is nonetheless productive in understanding the concept space.” I don’t think this is wrong, and like I semi-mentioned above, I’m on your side that Socrates may well have been in on the joke. (“Psych! There was no right answer! What have you learned?”) On the other hand, having spent some time reading philosophers hand-wringing over whether a Twin Earth-type hypothetical falsifies their definition, and whether they ought to start from scratch, it felt to me like what ought to have been non-problems were instead consuming enormous intellectual capital.
If you take a pragmatist view of concepts as functional human carvings of an environment (to the Ancients, “man is the measure of all things”), there would be no reason for us to expect our concepts’ boundaries and distinctions to be robust against bizarre parallel-universe scenarios or against one-in-a-trillion probabilities. If words and concepts are just a way of getting things done in everyday life, we’d expect them to be optimized for common environmental situations and user purposes—the minimum amount of specification or (to Continentals) “difference” or (to information theory) “information.”
I’m willing to cede that Socrates may have effectively demonstrated vagueness to his peers and later readers (though I don’t have the historical knowledge to know; does anyone?). I also think it’s probably true that a non-trivial amount of insight has been generated over many generations of conceptual analysis. But I also feel a lot of insight and progress has been foreclosed on, or precluded, because philosophers felt the need to keep quibbling over the boundaries of vagueness instead of stopping and saying, “Wait a second. This point-counterpoint style of definitions and thought experiments is interminable. We’ll never settle on a satisfying factoring that solves every possible edge case. So what do we do instead? How do we make progress on the questions we want to make progress on, if not by arguing over definitions?” I think, unfortunately, a functionalist, pragmatist approach to concepts hasn’t been fleshed out yet. It’s a hard problem, but it’s important if you want to get a handle on linguistic issues.

You can probably tell from OP that I’m not happy with a lot of the conceptual engineering discourse either. Much of it is fad-chasing bandwagoning. (Surprise surprise, I agree!) Many individuals seem to fundamentally misunderstand the problem—Chalmers, for instance, seems unable to perform the necessary mental switch to an engineer’s mindset of problem-solving; he’s still dwelling in definitions and “object-oriented,” rather than “functionalist,” approaches—as if the dictionary entry on “engineering” that describes it as “analyzing and building” were authoritative on any of the relevant questions. Wittgenstein called this an obsession with generalizing, and a denial of the “particulars” of things. (Garfinkel would go on to talk at length about the “indexicality” of particulars.) Important starts include: finding a way to deal with indexicality; talking about objects which are proximate in some statistical clusterspace (instead of by sufficient and necessary models); and effectively discussing “things of the same sort” without assuming that the definitional boundaries of a common word map perfectly onto “is/is not the same sort of thing.”
2. I can’t agree more that “a good account of concepts should include how concepts change.” But I think I disagree that counterfactual arguments are a significant source of drift. My model (inspired, to some extent, by Lakoff and Hofstadter) is that analogic extension is one of the primary drivers of change: X encounters some new object or phenomenon Y, which is similar enough to an existing concept Z such that, when X uses Z to refer to Y, other individuals know what X means. I think one point in support of this mechanism is that it clearly leads to family-resemblance style concepts—“well, this activity Y isn’t quite like other kinds of games, it doesn’t have top-down rules, but if we call it a game and then explain there are no top-down rules, people will know what we mean.” (And hence, Calvinball was invented.) This is probably a poor example and I ought to collect better ones, but I hope it conveys the general idea. I see people saying “oh, it’s one of those Y-things” or “you know that thing? It’s kinda like Y, but not really?” Combine this analogic extension with technological innovation + cultural drift, and you get the analogic re-application of terms—desktop, document, mouse all become polysemous.
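To make the mechanism concrete, here’s a toy sketch in Python (every name, vector, and threshold below is invented for illustration; this is a cartoon of the idea, not a serious model of semantics): treat each concept as a bag of exemplars in some feature space, and file a new phenomenon under whichever concept it lands nearest to.

```python
# Toy model of analogic extension (illustrative only): a new phenomenon
# gets filed under the nearest existing concept, whose extension then
# grows to cover it, producing family-resemblance categories over time.
from math import dist

concepts: dict[str, list[tuple[float, ...]]] = {
    # exemplars as made-up feature vectors (rule-boundedness, competition, play)
    "game": [(0.9, 0.8, 0.9), (0.7, 0.9, 0.8)],
    "ritual": [(0.8, 0.1, 0.3)],
}

def extend(concepts, new_thing):
    """Attach new_thing to the concept with the nearest exemplar."""
    label = min(
        concepts,
        key=lambda c: min(dist(new_thing, ex) for ex in concepts[c]),
    )
    concepts[label].append(new_thing)  # the word now covers one more case
    return label

# Calvinball: playful and competitive, but with no top-down rules.
print(extend(concepts, (0.1, 0.7, 0.95)))  # -> "game"
```

Run repeatedly, a concept’s exemplars can come to resemble one another only through chains of pairwise similarity, which is exactly the family-resemblance structure.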
I’m sure there are at least a couple other major sources of concept drift and sense accumulation, but I struggle to think of how often counterfactual arguments lead to real linguistic change. Can you provide an example? I know our culture is heavily engaged in discourses over concepts like “woman” and “race” right now, but I don’t think these debates take the character of conceptual analysis and counterfactuality so much as they do arguments of harm and identity.
Thanks for the reply. I’ll try to reply comprehensively, sorry if I miss anything. To start with—Aristotle.
What Aristotle Taught
> Was Aristotle not an originator of a school of syllogistic logic that treated concepts somewhat similarly to the logical positivists?
I’m going to break this into two parts—the part about logic, and the part about concepts. Logic first. Aristotle indeed wrote six works on logic and reasoning, which are most often collectively called the Organon. Most of it is developing a valid system of syllogistic logic. The really nice part about syllogistic logic is that correct syllogisms are indisputably valid (but not indisputably sound). Aristotle is totally clear about this. He showed—correctly—that logic, correctly applied, makes your conclusions as true as your premises (i.e. logic is valid); but that alone still doesn’t entitle you to certainty about your conclusions, as you can’t trust your premises any more than you could from the start (i.e., validity is not soundness).
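To see the validity/soundness split in one screenful, here’s a minimal sketch in Lean 4 (my own toy example, not anything drawn from Aristotle’s text): the proof checker accepts the Barbara syllogism no matter what the predicates actually denote, because validity is a property of the form alone.

```lean
-- Validity is formal: this proof checks regardless of whether the
-- premises are true of the actual world. Soundness would additionally
-- require h1 and h2 to be true, which logic alone cannot certify.
variable (Thing : Type) (Man Mortal : Thing → Prop) (socrates : Thing)

example (h1 : ∀ x, Man x → Mortal x)  -- "all men are mortal": a premise, not a certainty
    (h2 : Man socrates)               -- "Socrates is a man"
    : Mortal socrates :=              -- the conclusion inherits the premises' credibility
  h1 socrates h2
```

Swap in “all frogs are green” for the premises and the proof still checks; the checker certifies the inference, never the premises.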
In The Parable of Hemlock, ctrl+F “the Greeks.” Eliezer’s issue isn’t with syllogism. It’s with something different: the assertion that “all men are mortal” by definition. Aristotle says nothing of the sort, least of all in the Organon; he just uses the statement as a hypothetical premise to demonstrate the form of valid syllogism, the same way you might use an example like “all frogs are green; Harold is a frog; therefore Harold is green” as a lesson in validity for a logic class, regardless of whether purple dart frogs exist. The text that most clearly shows this is the Topics, where Aristotle characterizes good arguments as constructed by using syllogism (as characterized in the earlier works of the Organon) or enthymematic syllogism, especially when the syllogism begins from established beliefs (endoxa) as premises. Explicitly, these endoxa, like “all men are mortal,” are not certain or guaranteed to be true; but they are better than wild speculation, especially if you are trying to persuade someone. So Eliezer’s attack on the Greeks is off base, mistaking the assertion of validity for an assertion of soundness.
There’s nothing wrong with syllogistic logic, as long as you don’t make too much of it. Eliezer’s top-line conclusion is that “logic never dictates any empirical question [with certainty]”; I think you would be extremely hard-pressed to find a sentence in Aristotle which disagrees, and Eliezer’s clear imputation that the Greeks did disagree is ignorant and uncharitable. Logic is a useful tool for reasoning from premises you are reasonably confident in, to conclusions you can be similarly confident in.
It’s no straw man to say that Aristotle liked logic. The straw-manning comes when Eliezer asserts that “the Greeks” thought you could derive certain empirical truths from logic alone. (Parmenides, Spinoza, and Kant attempted this, but not Plato, Aristotle, or most philosophers.) Rather, Aristotle’s logic is all about taking established pretty-good beliefs (which are not called certain, but are generally acknowledged and are the best we have to work with) and having a sure way to arrive at exactly equally good beliefs. Putting this in writing was an incredibly valuable contribution to philosophy.
Now for the part about concepts. Did Aristotle treat concepts similarly to the logical positivists? Honestly, I think not; my impression is that the average positivist was a nominalist about the question of universals, while the best summary of Aristotle’s view on the topic probably heavily uses the word hylomorphism. It’s kinda his own deal, like how Plato was Platonist. I don’t love Aristotle’s metaphysics, and I think there are powerful skeptical/nominalist critiques of hylomorphism, which is after all a formalist view of one kind or another. But I don’t think Eliezer really advanced them, or understood Aristotle’s (or any Greek’s) phenomenology of concepts at all. For a little taste of how nuanced Aristotle’s thoughts on words and concepts actually were, here’s another bit from the last book of the Organon:
> It is impossible in a discussion to bring in the actual things discussed: we use their names as symbols instead of them; and therefore we suppose that what follows in the names, follows in the things as well, just as people who calculate suppose in regard to their counters. But the two cases (names and things) are not alike. For names are finite and so is the sum-total of formulae, while things are infinite in number. Inevitably, then, the same formulae, and a single name, have a number of meanings. [emphasis added]
Relevant Reading (by Philosopher)
> specific pre-moderns you think identify and discuss this problem.
If we’re discussing the problem of “gee whiz, in what sense do concepts exist and truthfully inhere in an ever-changing world?”, then virtually all of them! Here’s a short rogues’ gallery; take your pick if you’re intrigued by one in particular.
Plato: Plato’s answer is formalism. But even (or especially) if you think that’s absurd, his treatment of the question is incredibly valuable. Plato is deeply aware of, and deeply disturbed by, the fact that the world around him is changeable, that appearances and naively-constructed concepts deceive, and that nothing certain can be found in them. And the core of many of his dialogues is devoted to proving exactly that. Take the Theaetetus, where he talks about certain knowledge. Can we get it by sense perception? Not quite; appearances can deceive. What about judgment? Fallibility would indicate no. Is it justified true belief? Perhaps, but “justification” demands prior knowledge of the thing itself, so this is invalid by circularity! Plato strongly hints at his solution of formalism, but to pave the way to it, he first demolishes more standard accounts by trying to prove the slipperiness of ordinary concepts and the inaccessibility of certainty. Skeptical readers will find a great deal to like. (Ever wonder why J.L. Mackie’s skeptical “argument from queerness” begins as a steadfast defense of Platonism as the only way to objective morality? For generations, skeptics have made hay by starting with Plato’s objections to others, then attacking Plato’s rehabilitative view as the final step of a deflationary account.) Parmenides is also recommended reading, as most of it is criticism of the theory of forms. But it’s not for the faint of heart; you’ll need some really good secondary lit—or far better, a supportive professor to read through it with. Trying to read and understand it by yourself is an aneurysm risk.
Aristotle: Often denser than Plato. But he’s far more methodical and much easier to interpret, since he’s not writing dialogues with Straussian readings or citing myths which he didn’t believe or any of that artistic jazz. The Nicomachean Ethics may be a good place to see him apply his method of discourse about the natural world, but the writings of his that are most relevant to this conversation are definitely Physics and Metaphysics. (fun fact: the field was named for the book; “meta” is just Greek for “after”, so “Metaphysics” just means “after physics”, “more physics,” or maybe “physics 2”.)
Stoics: Chrysippus is your boy here. He is taken to be one of the first nominalists (a general term for one of the most popular non-realist views, i.e., that universal properties are words alone and not things in their own right). https://iep.utm.edu/chrysipp/#H5 has a summary you might like, and it may be the best we can do, since virtually all of Chrysippus’ actual writings are not extant (his views were passed to us by way of others’ summaries of them), and most other Stoics (like Epictetus or Aurelius) spent more time talking about ethics, with physics receiving more of a passing mention.
Epicureans: Really just Epicurus, as his teachings were passed down by Lucretius in De Rerum Natura. Virtually nothing else from this school is extant, but their influence is very significant. Steadfast materialists, atomists, atheists, and hedonists. This community would like their teachings a lot. I’ll take this opportunity to point out a trend which is commonplace throughout ancient philosophy: Epicureanism is atheist, but the text sings paeans to gods, using them as stand-ins for abstract concepts. This is weird, but not at all rare in ancient philosophy. Anytime you see someone invoke a god or a myth, before dismissing it as superstition, see if it’s useful to treat it as metaphor or conjecture instead. Remember that, for all the talk of gods and myths he engaged in, one of the two crimes Socrates was executed for was impiety.
Skeptics: You will agree with these people less than their names imply you will. They thought some weird stuff; Academic or Pyrrhonian, either way it sometimes comes off as worshiping ignorance. In any case, formalists they were not, and their eponymous attitude comes across in their writings, which are very clear that if there are in fact universals, we are either unable to come to know them, or even morally forbidden to try.
Peripatetics, Cynics, Cyrenaics and more: there are so many ancient Greeks. Many of them may not have written anything of value on this question, I can’t say. This is the part where I confess ignorance of and wonder at the true diversity of Ancient Greek thought.
Another big gap in my knowledge is Christian and medieval thought, but I had enough friends who studied it to understand that my received caricatures of it were misplaced. Aquinas apparently contributed things to metaphysics in the vein of Aristotle. Maybe Augustine has dope metaphysics, no idea. God features prominently, of course, so know thyself and whether that’s a turn-off.
Early Moderns: Spinozism is super weird and monist and stuff. Maybe not that. Kantianism is incomprehensible, even in the original German, but if you can find a good professor to walk you through it (preferably in a classroom environment), there is a reason he was so influential. The obvious suggestion is the Critique of Pure Reason, and it is definitely the one that is relevant here. (It’s where the separation of the analytic/synthetic distinction from the a priori/a posteriori one comes from! I don’t think it’s a good separation, but you will need to understand what it means if you want to understand many metaphysicians after him, most of all Kripke.) I personally like The Critique of Judgment too.
Continentals: Another gap in my knowledge. A friend read a lot of them and said “there’s no there there”, but I would guess that had as much to do with that friend as the writing itself. Another said Hegel is apparently very fun “in the right state of mind” (I think they meant psychedelics. This is not an endorsement of illegal drug use.) As with other categories on this list, I will acknowledge my ignorance of whatever brilliance might be here. For what it’s worth, if you are interested in critiquing the “classical” method of counterfactual reasoning—or reasoning in general—you may find allies here, even if they are strange bedfellows.
Moderns: Jumping right up to the 1900s. Meinong gets a bad rap but I still like him (do square circles exist? Maybe as much as anything else does!) Russell and Wittgenstein, you cited already. Tarski is also a great one; he created a formal theory of truth (“T-schemas” is a search term you can start with) which is intended to be generalizable over different uses of language, and which almost certainly has connections to anything philosophy-of-language-related. I like Carnap a whole lot, and he did a lot of philosophy of science which you may find relevant. I dislike Kripke a lot, but there’s no question that his thought is intensely relevant to any philosophy which deals directly with the idea of meaning (he doesn’t think it’s a thing, or at least, wants a deflated version of it to be the norm). He took himself to be in the tradition of Wittgenstein.
Counterfactual Reasoning
I really like, and generally agree with, your summary of how edge cases and obtuse counterexamples have pushed people to somewhat absurd conclusions. I’ll provide some pushback, but first let me indulge myself in agreeing, and providing an example. My undergraduate senior paper employed an unfortunately complex variant of the trolley problem (guess how many tracks were involved?) to contest an arcane ethical principle relevant to a facially absurd variant of utilitarianism. It was truly approaching self-parody, and I was well aware, I just wasn’t sure what other topic I had an idea about which would fill enough pages. (funnily enough, I can write more than enough pages on random internet fora, though.)
For all that ethics should be able to provide us with answers, and there should be answers even for corner cases… it is extremely clear to me that academic ethics has gone off the deep end. Ethical views are now defined based on cases which are often so ridiculous that whatever decision one would make in those situations is probably a noncentral example of ethical or unethical behavior. It’s clear enough to me how we got here, given a certain kind of steadfast realism about ethics, and it’s unclear what exact countervailing view I think should prevail… but somewhere, somehow, we have gone wrong.
Is the source of the problem counterfactual reasoning itself? Perhaps a certain too-strong form of it. But I also think that a mature version of “conceptual engineering” would see a lot of it employed.
> I’m sure there are at least a couple other major sources of concept drift and sense accumulation, but I struggle to think of how often counterfactual arguments lead to real linguistic change. Can you provide an example?
The example, or family of examples, that I want to give you, and to propose as an incredibly useful analogy here (definitely one with lots of examples of “concept drift and sense accumulation”), is law. It’s not exactly common usage, but legal language has a bunch of features that make it a desirable target for “conceptual engineering.” The boundaries of initially-vague concepts like “probable cause” or “slander” are often decided based on past definitions and laid-out sets of necessary and sufficient conditions in case law. But they are also subject to shift when corner cases are encountered which clearly do or don’t fall into the category—previous understandings of the necessary and sufficient conditions be damned. Ultimately, the courts converge on definitions that are useful at the very least, and they use a number of methods to do it, counterfactual reasoning and N&S conditions being some of the tools in the toolbox. Do you think law should dispose of those tools, and do you think it would lead to better decisions if they did? My answer is “no”; I think they’re great pragmatic tools in conjunction with other tools, and that makes me think that N&S conditions and counterfactual reasoning aren’t the real problem here. They can be useful ways to engineer concepts, rather than just a destructive way to attack them with corner cases.
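To sketch the structure I have in mind (all of it invented for illustration; this is obviously not how any real jurisdiction encodes doctrine), picture a legal concept as a default set of necessary and sufficient conditions, plus a growing list of precedents that corner cases carve out:

```python
# Toy picture of case-law concept engineering (illustrative only):
# a rule of N&S conditions, with precedents that override it whenever
# a corner case shows the rule and the intuitive verdict coming apart.

def slander_by_rule(statement) -> bool:
    # default necessary-and-sufficient conditions (made up for the sketch)
    return statement["false"] and statement["spoken"] and statement["harmful"]

precedents: list[tuple[dict, bool]] = []  # (facts, holding) pairs

def slander(statement) -> bool:
    for facts, holding in precedents:
        if facts == statement:  # a court has already carved this case out
            return holding
    return slander_by_rule(statement)

# Corner case: literally true but framed to deceive. The rule says "no",
# a court decides "yes", and the concept's boundary shifts.
edge = {"false": False, "spoken": True, "harmful": True, "misleading": True}
precedents.append((edge, True))
print(slander(edge))  # True: precedent now trumps the written conditions
```

The N&S rule does the day-to-day work; the precedent list is where corner cases re-engineer the concept’s boundary.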
Legal language is also nice because it gives us a clear sense of an evaluative objective, a way to “grade” our engineering project—in a word, we might say “justice.” (Meanwhile, to engineer common language, we might grade based on “clarity” or “intersubjectivity.”) When the existing body of rules and conditions still leaves room for doubt, we can employ and develop our terminology to produce results that accord with a notion of justice.
I hope you like that proposed application of the theory. Interested to hear your thoughts on whether it’s fitting, or if not, why not.
Appreciate the thorough response; there are some good recs here. I haven’t read any of Chrysippus, and my knowledge of the Epicureans is limited to their moral philosophy (alongside that of the Stoics). That said, I can’t help but get the feeling you’re negging me a little with the references to skeptics, continentals, and professorial assistance! Fortunately or unfortunately, I’m less a rationalist than my presence here might imply—Bourdieu’s symbolic capital and ethology’s signaling theory are interchangeable in my book. Also fortunately or unfortunately, I’m not a uni student these days, my institutional education concluded a few years back, so I suppose I’ll have to make headway on any texts solo, without professorial help.
A quick meta-note: I think there’s a problem whereby people who study the history of philosophy have incentives to steelman their subjects’ ideas and thinking, in order to justify their study. I imagine this claim will be received with some pushback, so I’ll try to break it down into less controversial parts, and we can sum them together. First, I think there are strong incentives in academia for everyone to constantly justify their work. Whether it’s prepping for department socials, getting tenure, applying for grants, or just coming to peace internally with a lifetime dedicated to scholarship, it’s hard to escape this subtle narrative of self-justification. Second, I think when we read ancient texts, we’re in a tricky situation. As Wittgenstein once said of Plato,
> Perhaps Plato is no good, perhaps he’s very good. How should I know? But if he is good, he’s doing something which is foreign to us. We do not understand.
Perhaps Witt overstates the case, but I feel like we can agree that texts are incredibly “gappy,” as the literary theorist Wolfgang Iser says. That is, so much of texts’ intended meaning resides in metonymic implication, “what can be left unsaid,” contextual situation, etc.—and the further we get, culturally and temporally, from these texts, the easier it is to project contemporary schemas onto philosophy past. Not to give you homework, but you may be interested in reading the interview I did with philosopher Jonathan Livengood around the same time I wrote the piece under discussion. We talk a bit about N&S conditions and connections between Plato and positivism but, most relevant to our current discussion, we also chatted about secondary sources’ treatment of their subjects. He says:
> The danger is more on the side of over-interpreting, or being overly charitable to the target. I just wrapped up a grad seminar on the problem of induction, and we were looking at the historical development of the problem of induction from Hume to 1970. As I pointed out, when you look at Hume, Hume’s great, he’s fun to read, but he’s also deeply confused, and you don’t want to do the following, which is a mistake: If you start with the assumption that Hume was just right, and assume that, if you’re seeing an error it must be an error in your interpretation—if that’s your historiographical approach, you’re not going to understand Hume, you’re going to understand this distorted SuperHume, who knows all these things Hume didn’t know, and can respond to subtle distinctions and complaints that someone living now is able to formulate. That’s not Hume! Hume didn’t have an atomic theory, he didn’t know anything about DNA or evolution; there are tons of things that were not on his radar. He’s not making distinctions we’d want him to make, that a competent philosopher today would make. There’s a real danger writing secondary literature, or generating new interpretations. If you want to publish a book on Hume, you need to say something new, a new angle—what’s new and also responsible to what Hume wrote? It ends up doing new philosophy under the guise of history.
I think it’s hard to litigate this for specific texts, because of their gappiness. We’ll never know, unless/even if we have rich historiographic knowledge, whether we’re being overly charitable or uncharitable. I do think your Aristotle examples are compelling counter-examples to Yudkowsky’s analysis, but looking at some of the other philosophers you mention as being “woke” on concepts… there I’m a little more skeptical. (Kripke I think we should strike off the list, since he’s very explicitly a Wittgensteinian in thought; ditto with many continentals.)
I think it’s worth re-clarifying what I think the historic blindspots of philosophy have been, and the way I believe a style of inquiry has proven unproductive. I know my original piece is both very long, by online standards, and not especially clear structurally.
Essentially, I think that most philosophical projects which fail to appreciate the Wittgensteinian “words don’t work that way” lesson will end up doing lexicographic work, not philosophy. My claim is that, with a concept like “causality” or “justice” or “beauty” (there are dozens of equally contested terms, historically), there is no “there” there. Rather, there are a multitude of continuous, analogically and historically related phenomena which are “close enough” in various ways that, with some extra specification via contextual use, these handles are pragmatically useful. If one seeks to analyze the natural-language concept “causality” or “justice” or “beauty” by finding commonalities between the natural-language meanings, they will end up doing primarily historical, cultural, and lexicographic work, because these word-bundles are in no way atomic; they are in no way essential. In another culture, or another language, there might be twelve types of causality or justice or beauty. They might conflate justice and beauty as a single term. How, then, does it make any sense to treat these, implicitly, as if they were natural kinds, that is, to look (as many 20th C philosophers do) for an explanation of causality that is robust to all native-English usages, but also has some deep underlying quasi-essence which can be singularly studied, analyzed, and understood?

Philosophers in the know today will readily admit there are no natural kinds—species were the last example to cling to, and species delineation is very messy and socially constructed, as any undergrad biologist knows. There are only continuities, at least at levels higher than particles, because the world is incredibly complex, and the possible arrangements of matter functionally infinite. (I know very little about physics here, so excuse any ignorance.) Our concept of causality, as Livengood talks about in the interview, is tied up in a long cultural history of moral judgments and norms, in folk theories and historically contingent metaphors. It is not a single coherent “thing.” And its bounds do not relate to intrinsic material forces so much as they do to human use. Native speakers will attribute causality in a way that is pragmatic, functional, and social.
In other words, natural language is near-useless, and often counterproductive, in trying to understand natural territories. Until recently, we might remember, plant and animal species were classified by their value to humans—poisonous vs medicinal plants, edible vs nonedible, tame vs wild animals, noble vs base beasts, etc. Imagine, now, a natural philosopher attempting to hash out a concise and robust definition of “noble animals,” separate from a nominalist gloss like “they’re all described as noble by humans,” as if there were some property inherent to these organisms, separate from their long cultural and historical understanding by humans. Such a philosopher would learn, perhaps, a bit about human beings, but almost nothing worthwhile about the animals.
This is the situation I see with conceptual analysis. Natural language is a messy, bottom-up taxonomy built around pragmatic functionality, around cultural and social coordination, around human life. Conceptual analysis acts as if there is a “there” there—as if there were some essence of “justice” or “causality” that maps closely to the human concept and yet exists separate from human social and cultural life. I submit there is not.
(These folk might quibble that they don’t believe in essences, but as I remark to Jon, my opinion here is that “a classical account of concepts as having necessary and sufficient criteria in the analytic mode is in some way indistinguishable from the belief in forms or essences insofar as, even if you separate the human concept from the thing in the world, if you advance that the human concept has a low-entropy structure which can be described elegantly and robustly, you’re essentially also saying there’s a real structure in the world which goes with it. If you can define X, Y, & Z criteria, you have a pattern; and if you can describe a concept in a non-messy way, as having regularity, then you’re granting a certain Platonic reality to the concept; the pattern of regularity is a feature of the world.”)
We might consider the meaning of textual “meaning.” It can refer to an author’s intention, or a reader’s interpretation. It can refer to a dictionary definition, or the effect of a cause. All these are present in our language. Literary theorists spent the 20th century arguing over whether meaning just “is” unknowable author intention or diverse reader interpretation or some formal, inherent thing inside a text. (This last position is absurd and untenable, but we’ll set that aside for now.) This “debate” strikes me as a debate not over the world, or the territory, or the nature of reality, but over whether one sense of a term ought to be standard or another. It is fundamentally lexicographic. There are many valuable insights tucked into these incessant theoretical debates, but they suffer from residing inside a fundamentally confused frame. There is no reason for one singular definition of “meaning” to exist; “words don’t work that way.” Many senses have been accumulated, like a snowball, around some initial core. The field ought, in my opinion, to have separated authorially intended meaning from reader-interpreted meaning, called them different terms, and called it a day. I say “ought”—why? On what grounds? Because, while in everyday linguistic use, a polysemous “meaning” might be just fine & functional, within the study of literature, separating intent from interpretation is crucial, and having diverse schools who use the term “meaning” in radically different ways only breeds confusion & unproductive disagreement. It is hard for me to understand why philosophers would ever approach the “causality” bundle as a whole, when it is clearly not in any way a singular concept.
I know many philosophers have attempted to carve up terms more technically, in ways more pragmatically suited to the kinds of inquiries they want to make (Kevin Scharp on truth comes to mind), but many, historically, have not.
Second, any philosopher who takes edge cases seriously in trying to understand natural language does not understand natural language to begin with. Because our words are functional tools carving up a continuous material space, and not one-to-one references to real, discrete objects with essences, they are optimized for real human situations. Much of the fretting over gendered language, or racial language, arises because there is increasing awareness of “edge cases” or “in-betweens” that disrupt our clean binaries. Similarly, Pluto’s ambiguous planet/non-planet status arises because it, and other bodies in our solar system, sit awkwardly between cultural categories. There is no such “thing” as a planet. There are various clusters of atoms floating around, of many different sizes and materials, and we’ve drawn arbitrary lines for functional and pragmatic reasons. The best piece I can recommend on this is David Chapman’s “ontological remodeling” (I quibble with his use of “ontological,” but it’s no matter—it shows how cultural and historical, rather than inherent or natural, the concept of “planet” is.)
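A tiny numerical illustration of that point, with made-up scores standing in for real astronomical measurements (the “cleared its neighborhood” figures below are placeholders, not data): the underlying quantity is continuous, and “planet” is just a threshold we chose to draw across it.

```python
# "Planet" as a line drawn across a continuum (scores are invented
# placeholders, not real measurements).
clearing_score = {
    "Jupiter": 10_000.0,
    "Earth": 1_000.0,
    "Mars": 100.0,
    "Pluto": 0.01,
    "Ceres": 0.001,
}

THRESHOLD = 1.0  # the arbitrary line; move it and the "kind" changes

planets = {body for body, score in clearing_score.items() if score > THRESHOLD}
print(sorted(planets))  # ['Earth', 'Jupiter', 'Mars']; Pluto's exclusion is a choice
```

Nothing about the bodies themselves changed in 2006; only the threshold did.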
I’ll quote the philosopher Marcus Arvan here in the hope of clarifying my own often messy thought:
> I increasingly think — and so do Millikan, Baz, and Balaguer — that [the analytic] approach to philosophy is doubly wrong. First, it is based on a misunderstanding of language. I think Wittgenstein (and Millikan) were both right to suggest that our words (and concepts) have no determinate meaning. Rather, we use words and concepts in fundamentally, irreducibly messy ways — ways that fluctuate from moment to moment, and from speaker/thinker to speaker/thinker. A simpler way to put this is that our concepts — of “free will”, “justice” etc. — are all, in a certain way, defective. There is no determinate meaning to the terms “free will”, etc., and thus philosophical investigation into what “free will” is will be likely to lead, well, almost everywhere. At times, we use “free will” to refer (vaguely) to “reason-responsiveness”, or to “actual choices”, or whatever — but there is no fact of the matter which of these is really free will. Similarly, as Balaguer points out in another paper, there is no fact of the matter whether Millianism, or Fregeanism, or whatever about the meaning of proper names is right. All of these positions are right — which is just to say none of them are uniquely right. We can, and do, use proper names in a myriad of ways. The idea that there is some fact of the matter about what “free will” picks out, or what names mean, etc., all fundamentally misunderstand natural language.
> And there is an even deeper problem: all of it is hollow semantics anyway. Allow me to explain. In his paper on compatibilism and conceptual analysis, Balaguer gives the following example. Two psychologists, or linguists, or whatever are trying to figure out what a “planet” is. They then debate to no end whether Pluto is a planet. They engage in philosophical arguments, thought-experiments, etc. They debate the philosophical implications of both sides of the debate (what follows if Pluto is a planet? What follows if it is not?). Here, Balaguer says, is something obvious: they are not doing astronomy. Indeed, they are not really doing anything other than semantics. And notice: there may not be a fact of the matter of what “planet” refers to, and it does not even matter. What matters is not what the concept refers to (what is a planet?), but rather the stuff in the world beyond the concepts (i.e. how does that thing — Pluto — behave? what is its composition? etc.).
I understand that this critique is focused on 20th C analytic, and that your comment above is focused more on the ancients. But it seems like big picture, what we’re trying to figure out is, “How well-known are these problems? How widespread are philosophical practices which fall into linguistic pitfalls unwittingly?”
Showing my hand: in the nominalist/conceptualist/realist frame, it seems to me that any frame but nominalism is scientifically untenable. Various cog-sci and psych experiments have, in my opinion, disproven conceptualism, whereas the collapse of natural kinds bars realism for those empiricists unwilling to believe in the supersensory realm. I do want to explore nominalism more, and probably should have included at least a paragraph on it in this piece. Many regrets! I believe I felt under-educated on the topic at the time of writing, but this is a good reminder to read up. From the secondary sources I’ve come across, it seems like the closest analogue to the emerging modern view of language, universals, natural kinds, abstract entities, etc.
(Sidenote: isn’t Aristotle a realist like Plato? Or at least, in the medieval era his legacy became such? I usually see him pitted against nominalism, as one of the orthodoxies nominalism challenged.)
My big-picture understanding of the philosophical history is that a Platonic realism/formalism outcompeted more nominalist or pragmatic contemporaneous views like those of Protagoras (or perhaps the Epicureans!). The diversity of Greek thought seems incontestable, but the “winners” less so. (It’s not for nothing they say all philosophy is footnotes to Plato.) Realist views go on to dominate Western philosophy up until the medieval era, bolstered by the natural incentives of Christian theology. Nominalism emerges, and claims a non-trivial number of philosophers, but never fully replaces more realist, analytic, or rationalist viewpoints. (I include rationalism because the ideas of the a priori and the analytic are both, IMO, fatally undermined by nominalism + the messiness of natural language.) American pragmatism strikes hard against the Hegelian rationalisms of its day, but regrettably makes little long-term impact on analytic philosophy. Similarly, Wittgenstein’s warnings are largely ignored by the analytic community, which continues on with conceptual analysis into the present day, as if nothing were the matter with its methods and puzzle-like riddles. (The continentals, for all their problems, did take seriously Wittgenstein’s critique. Foucault’s Archaeology of Knowledge, or Lyotard’s examination of language games, or Bourdieu’s dismissal of essentialism, each come to mind.) I am curious if you’d contest this.
I am still trying to understand why the linguistic critiques of such riddles and paradoxes, by a philosopher as well-known and widely read as Wittgenstein, have not more widely impacted the academic philosophy community. It seems you’re on my side on this one (the issues with contemporary academic philosophy), so allow me to quote some speculation you might find interesting. The first cause is likely self-selection out: whereof one cannot speak, thereof one must be silent. And so it goes with graduate students pilled on later Witt. Second are problems of selection proper: knowledge regimes, and their practitioners who have invested lifetimes in them, do not cede their own follies lightly. Meanwhile, they continue to select students who confirm, rather than challenge, their own intellectual legacies—both unconsciously, because of course they believe their intellectual legacies are more correct or important, and consciously:
> A friend who was considering applying to graduate school in philosophy once told me that a professor described what the graduate programs are looking for as follows: they want someone who will be able to “push the ball forward.” The professors want to know that their graduate students will engage with the professors’ problems in a productive way, participating in the same problem-solving methods that the professors use — for example, clarifying puzzles by drawing creative new distinctions involving obscure and highly technical philosophical concepts.
> Needless to say, if this is the requirement for becoming a professional philosopher, then quite a few kinds of philosophers need not apply. Such as philosophers who ask questions and resist asserting answers, or philosophers who view the adoption of dogmatic philosophical positions as arbitrary and pointless. Oddly enough, any philosopher with the perspicuity to understand the futility of the puzzle-playing philosophers’ methods will probably struggle to be heard and understood in an American philosophy department today, much less employed. In effect, a kind of blindered credulousness is now a prerequisite for entering and rising in a field that is ostensibly defined by its commitment to unrelenting critical inquiry. (src)
Still, when I learned that philosophers today still take seriously one another’s intuitions (about bizarre, other-worldly counterfactuals, no less) as sources of knowledge about reality, I realized that inexplicable amounts of folly can persist in disciplines. Alas.
Regarding law, that is indeed a good example of counterfactuals shaping language, though I’m not sure how much legal definitions filter into mainstream usage. Either way, legal language really is such a rich area of discussion. Textualist views, which I would previously have dismissed as naive—”there’s no inherent or objective meaning in the words, man! Meanings drift over time!”—have some compelling pragmatic arguments behind them. For one, a Constitutional provision or Congressional law is not the product of a single designer, with a singular spirit of intent, but rather the result of a dynamic process within a committee of rivals. A bill must pass both chambers of Congress and then the Executive chair; at each stage, there will be voters or drafters with very different intentions or interpretations of the wording of the law being passed. Textualism, in this frame, is a pragmatic avoidance of this chaotic, distributed intentionality in favor of the one common source of truth: the actual letter of the law as written and passed. How can we meaningfully speculate, in such a system, about what Congress “intended,” when the reality is a kludge of meanings and interpretations loosely coordinated by the text-at-hand? A second case for textualism is that it prevents bad incentives. If a lawmaker or coalition of lawmakers can create a public impression of the intent, or spirit, of a law, which exists separate from the actual impressions of the voting and drafting representatives, and this intent or spirit is used in court cases, an incentive is created for strategic misrepresentation of bills in order to sway future court cases. Third, a textualist might appeal to public transparency of meaning, in the vein of the Stele of Hammurabi. A population must be able to transparently know the rules of the game they are playing. Oliver Wendell Holmes: “We ask, not what this man meant, but what those words would mean in the mouth of a normal speaker of English, using them in the circumstances in which they were used … We do not inquire what the legislature meant; we ask only what the statutes mean.” How the words are understood is, from this perspective, more important than the intent—since individuals will act according to the law as understood (and not as intended).
These are the steelmen of textualism—look what happens, however, when it’s applied naively:
“Well, what if anything can we judges do about this mess?” Judge Richard Posner asked that question midway through his opinion in United States v Marshall.
[...]
The issue in Marshall was whether blotter paper impregnated with the illegal drug LSD counts as a “mixture or substance containing” LSD. The question matters because the weight of the “mixture or substance” generally determines the offender’s sentence. A dose of LSD weighs almost nothing compared to blotter paper or anything else that might be used in a similar way (such as gelatin or sugar cubes). If the weight of the medium counts, a person who sold an enormous amount of pure LSD might receive a much lighter sentence than a person who sold a single dose contained in a medium. Also, the per-dose sentences for sales of LSD would bear an arbitrary relationship to the per-dose sentences for sales of other drugs, because the LSD sentences would be, for all practical purposes, a function of the weight of the medium.
[...]
The majority ruling held that blotters were “a mixture or substance containing” LSD, and therefore part of its weight. “Judge Posner’s dissent argued that the ‘mixture or substance’ language should be interpreted not to include the medium, because the majority’s conclusion led to irrational results; indeed, results so irrational that they would be unconstitutional if the statute were not construed differently.”
[...]
Treating the blotter paper as a “mixture or substance containing” LSD produces results that are, according to Judge Posner and Justice Stevens, who dissented in Chapman, “bizarre,” “crazy,” and “loony.” Selling five doses of LSD impregnated in sugar cubes would subject a person to the ten-year mandatory minimum sentence; selling 199,999 doses in pure form would not.
How did the court come to this decision?
The Supreme Court used dictionaries to define “mixture,” coming to the conclusion that a blotter fit the definition (“a ‘mixture’ may … consist of two substances blended together so that the particles of one are diffused among the particles of the other”) and that this was sufficient for their ruling. And yet, Strauss writes, this dictionary definition has little to do with normal English use of the word mixture, which would never call a water-soaked piece of paper a “mixture” of paper and water, or a piece of paper soaked in salt water and dried, with the salt crystals remaining, a “mixture” of paper and salt.
A man was sentenced to decades in prison over this. The truth is that Congress almost certainly did not intend to write legislation in which selling five doses of sugar-cube LSD resulted in a higher sentence than 200k pure doses. The situation eerily echoes philosophical discourses I’ve come across. Chalmers, for instance, looking up “engineering” in the dictionary in order to figure out the solution to analytic philosophy’s problems, is not nearly as harmful as the Marshall ruling. But it is equally confused. The map is not the territory, as LessWrongers are fond of saying—and justice is not found in the dictionary.
Hmmm, after giving it a day, I feel like I may have unfairly or unproductively bombarded you here, so know I won’t be offended if I don’t get a response.
I’ll try to read some of the recommendations, and perhaps in a while I can come back to this conversation with more of value to contribute.
Points for Further Discussion
Finally, I want to thank you for taking the time to write a response to an ill-tempered crank; I hope I’ve acquitted myself honorably enough in this follow-up to receive another. If you’d like to continue the conversation into a more productive discussion of conceptual engineering itself, rather than disputing the ancients, I’d be interested to hear your thoughts on the following propositions (which are, of course, derived directly from ancient thinking):
Counterfactual reasoning (/”Conceptual Analysis”) is the primary tool which has been used to demonstrate the vagueness of concepts, so disposing of it is dangerous to any project which is premised on the vagueness of concepts. It is one extremely useful tool (among others) for engineering and streamlining useful conceptual frameworks which align well with language.
A good account of concepts should include how concepts change. For better or for worse, concepts change when people argue about them—often counterfactually. This means that a project which sets out to understand concepts, but neglects to include counterfactual reasoning as an element of the project, may run into some very hard times very fast. “Conceptual engineering,” as laid out in the article above, is not (yet?) equipped with the necessary tools for this.
Thanks for the thorough reply! This makes me want to read Aristotle. Is the Nicomachean preface the best place to start? I’ll confess my own response here is longer than ideal—apologies!
Protagoras seems like an example of a Greek philosopher arguing against essences or forms as defined in some “supersensory” realm, and for a more modern understanding of concepts as largely carved up by human need and perception. (Folks will often argue, here, that species are more or less a natural category, but species are—first—far more messily constructed than most people think, even in modern taxonomy; second, pre-modern, plants were typically classed first and foremost by their effects on humans—medicine, food, drug, poison.) Still, it’s hard to tell from surviving fragments, and his crew did get run out of town...
I say:
> For a while, arguably until Wittgenstein, philosophy had what is now called a “classical account” of concepts as consisting of “sufficient and necessary” conditions. In the tradition of Socratic dialogues, philosophers “aprioristically” reasoned from their proverbial armchairs
Do you think it would be more fair to write “philosophy [was dominated by] what is now called a classical account”? I’d be interested to learn why the sufficient & necessary paradigm came to be called a classical account, which seems to imply a broader reach than Plato alone; but perhaps the name itself reflects a lack of charity toward the ancients? (My impression is that the majority of modern analytic is still, more or less, chugging ahead with conceptual analysis, which, even if its practitioners would disavow sufficient and necessary conditions, seems largely premised on such a view—adopt a Wittgensteinian, family-resemblance view and the end goal of a robust and concise definition becomes impossible. Perhaps some analytic still finds value in the process, despite being more self-aware about the impossibility of some finally satisfying factoring of a messy human concept like “causality” or “art”?) One other regret is that this piece gives off the impression of a before/after specific to philosophy, whereas the search for a satisfying, singular definition of a term has plagued many fields, and continues to do so.
Like I said, I haven’t read Aristotle, but Eliezer’s claim seems at most half-wrong from a cursory read of Wikipedia and SEP on “term logic.” Perhaps I’m missing key complications from the original text, but was Aristotle not an originator of a school of syllogistic logic that treated concepts somewhat similarly to the logical positivists—as being logically manipulable, as if they were a formal taxonomy, with necessary and sufficient conditions, on which deduction could be predicated? I’ve always read those passages in HGtW as arguing against naive definition/category-based deduction, and for Bayesian inference or abduction. I also must admit to reading quite a bit of argument-by-definition among Byzantine Christian philosophers.
Frustratingly, I cannot find “aprioristically” or “armchair” in Bishop either, and am gonna have to pull out my research notes from the archive. It is possible the PDF is poorly indexed, but more likely that line cites the wrong text, and the armchair frame is brought up in the Ramsey paper or similar. I’ll have to dive into my notes from last spring. Bishop does open:
> Counterexample philosophy is a distinctive pattern of argumentation philosophers since Plato have employed when attempting to hone their conceptual tools… A classical account of a concept offers singly necessary and jointly sufficient conditions for the application of a term expressing that concept. Probably the best known of these is the traditional account of knowledge, “X is knowledge iff X is a justified true belief.” The list of philosophers who have advanced classical accounts… would not only include many of the greatest figures in the history of philosophy, but also highly regarded contemporary philosophers.
This is not, however, the same as saying that it was the only mode across history, or before Wittgenstein—ceded.
Glad to step away from the ancients and into conceptual engineering, but I’d love to get your take on these two areas—Aristotle’s term logic, and whether there are specific pre-moderns you think identified and discussed this problem. From your original post, you mention Kripke, Kant, Epictetus. Are there specific texts or passages I can look for? Would love to fill out my picture of this discourse pre-Wittgenstein.
On the conceptual analysis/engineering points:
1. I have wondered about this too, if not necessarily in my post here then in posts elsewhere. My line of thought being, “While the ostensible end-goal of this practice, at least in the mind of many 20th C practitioners—that is, discovering a concise definition which nonetheless robustly describes all possible instances of the concept which a native speaker would ascribe—is impossible (especially when our discourse allows bizarre thought experiments a la Putnam’s Twin Earth…), nonetheless, performing the moves of conceptual analysis is productive in understanding the concept space.” I don’t think this is wrong, and like I semi-mentioned above, I’m on your side that Socrates may well have been in on the joke. (“Psych! There was no right answer! What have you learned?”) On the other hand, having spent some time reading philosophers hand-wringing over whether a Twin Earth-type hypothetical falsifies their definition, and whether they ought to start from scratch, it felt to me like what ought to have been non-problems were instead taking up enormous intellectual capital.
If you take a pragmatist view of concepts as functional human carvings of an environment (to the Ancients, “man is the measure of all things”), there would be no reason for us to expect our concepts’ boundaries and distinctions to be robust against bizarre parallel universe scenarios or against one-in-a-trillion probabilities. If words and concepts are just a way of getting things done, in everyday life, we’d expect them to be optimized to common environmental situations and user purposes—the minimum amount of specification or (to Continentals) “difference” or (to information theory) “information.”
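To make the information-theory gloss slightly more concrete, here’s a loose toy analogy (mine alone, with invented frequencies; nothing from the sources above): treat word-concepts as codewords for situations. An optimal prefix code gives common situations short handles, while rare, Twin-Earth-grade situations are left to lengthy circumlocution, which is roughly what we’d expect of a lexicon optimized for everyday use.

```python
import heapq
from itertools import count

def huffman_lengths(freqs):
    """Code lengths (in bits) of an optimal prefix code over freqs."""
    tiebreak = count()  # keeps the heap from ever comparing dicts
    heap = [(f, next(tiebreak), {name: 0}) for name, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {k: depth + 1 for k, depth in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]

# Invented encounter frequencies for "situations" a speaker must name.
situations = {"dog": 400, "cat": 350, "odd-looking dog": 30,
              "dog-cat hybrid": 5, "twin-earth XYZ-dog": 1}
print(huffman_lengths(situations))
# {'dog': 1, 'cat': 2, 'odd-looking dog': 3,
#  'dog-cat hybrid': 4, 'twin-earth XYZ-dog': 4}
```

The common cases get the one-syllable handles; the philosopher’s edge cases are expressible, but only at a price no everyday speaker pays, so no everyday concept is optimized for them.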
I’m willing to cede that Socrates may have effectively demonstrated vagueness to his peers and later readers (though I don’t have the historical knowledge to know; does anyone?). I also think it’s probably true that a non-trivial amount of insight has been generated over many generations of conceptual analysis. But I also feel a lot of insight and progress has been foreclosed on, or precluded, because philosophers felt the need to keep quibbling over the boundaries of vagueness instead of stopping and saying, “Wait a second. This point-counterpoint style of definitions and thought experiments is interminable. We’ll never settle on a satisfying factoring that solves every possible edge case. So what do we do instead? How do we make progress on the questions we want to make progress on, if not by arguing over definitions?” I think, unfortunately, a functionalist, pragmatist approach to concepts hasn’t been fleshed out yet. It’s a hard problem, but it’s important if you want to get a handle on linguistic issues. You can probably tell from OP that I’m not happy with a lot of the conceptual engineering discourse either. Much of it is fad-chasing bandwagonry. (Surprise surprise, I agree!) Many individuals seem to fundamentally misunderstand the problem—Chalmers, for instance, seems unable to perform the necessary mental switch to an engineer’s mindset of problem-solving; he’s still dwelling in definitions and “object-oriented,” rather than “functionalist,” approaches—as if the dictionary entry on “engineering” that describes it as “analyzing and building” were authoritative on any of the relevant questions. Wittgenstein called this an obsession with generalizing, and a denial of the “particulars” of things. (Garfinkel would go on to talk at length about the “indexicality” of particulars.) Finding a way to deal with indexicality, to talk about objects which are proximate in some statistical clusterspace (instead of by sufficient and necessary models), or to effectively discuss “things of the same sort” without assuming that the definitional boundaries of a common word perfectly map to “is/is not the same sort of thing”: these would all be important starts.
2. I can’t agree more that “a good account of concepts should include how concepts change.” But I think I disagree that counterfactual arguments are a significant source of drift. My model (inspired, to some extent, by Lakoff and Hofstadter) is that analogic extension is one of the primary drivers of change: X encounters some new object or phenomenon Y, which is similar enough to an existing concept Z such that, when X uses Z to refer to Y, other individuals know what X means. I think one point in support of this mechanism is that it clearly leads to family-resemblance style concepts—“well, this activity Y isn’t quite like other kinds of games, it doesn’t have top-down rules, but if we call it a game and then explain there are no top-down rules, people will know what we mean.” (And hence, Calvinball was invented.) This is probably a poor example and I ought to collect better ones, but I hope it conveys the general idea. I see people saying “oh, that Y-thing” or “you know that thing? It’s kinda like Y, but not really?” Combine this analogic extension with technological innovation + cultural drift, and you get the analogic re-application of terms—desktop, document, mouse all become polysemous.
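If it helps, here’s a minimal sketch of the extension mechanism I have in mind (a toy of my own, with invented features and an arbitrary threshold, not anything taken from Lakoff or Hofstadter directly): membership spreads by resemblance to any existing member, so a category chains outward until its extremes share nothing.

```python
def similarity(a, b):
    """Jaccard overlap between two feature sets."""
    return len(a & b) / len(a | b)

def maybe_extend(concept, item, threshold=0.25):
    """Admit item if it resembles at least one current member."""
    if any(similarity(member, item) >= threshold for member in concept):
        concept.append(item)
        return True
    return False

game = [{"rules", "winners", "competition", "board"}]   # a chess-like core
for candidate in [
    {"rules", "winners", "competition", "ball"},        # football
    {"rules", "winners", "ball", "solo"},               # solo ball games
    {"ball", "solo", "make-believe"},                   # bouncing a ball at the wall
    {"make-believe", "improvised-rules"},               # Calvinball
]:
    maybe_extend(game, candidate)

print(len(game))                      # 5: every hop succeeded
print(similarity(game[0], game[-1]))  # 0.0: chess and Calvinball share no feature
```

Each admission was locally reasonable, yet no necessary-and-sufficient condition survives across the whole category: family resemblance falls out of the mechanism for free.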
I’m sure there are at least a couple other major sources of concept drift and sense accumulation, but I struggle to think of how often counterfactual arguments lead to real linguistic change. Can you provide an example? I know our culture is heavily engaged in discourses over concepts like “woman” and “race” right now, but I don’t think these debates take the character of conceptual analysis and counterfactuality so much as they do arguments of harm and identity.
Thanks for the reply. I’ll try to reply comprehensively, sorry if I miss anything. To start with—Aristotle.
What Aristotle Taught
I’m going to break this into two parts—the part about logic, and the part about concepts. Logic first. Aristotle indeed wrote six works on logic and reasoning, which are most often collectively called the Organon. Most of it is developing a valid system of syllogistic logic. The really nice part about syllogistic logic is that correct syllogisms are indisputably valid (but not indisputably sound). Aristotle is totally clear about this. He showed—correctly—that logic, correctly applied, makes your conclusions as true as your premises (i.e. logic is valid); but that alone still doesn’t entitle you to certainty about your conclusions, as you can’t trust your premises any more than you could from the start (i.e., validity is not soundness).
In The Parable of Hemlock, ctrl+F “the Greeks.” Eliezer’s issue isn’t with syllogism. It’s with something different: the assertion that “all men are mortal” by definition. Aristotle says nothing of the sort, least of all in the Organon; he just uses the statement as a hypothetical premise to demonstrate the form of valid syllogism, the same way you might use a sample like “all frogs are green, Harold is a frog, therefore Harold is green” as a lesson in validity in a logic class, regardless of whether purple dart frogs exist. The text that most clearly shows this is the Topics, where Aristotle characterizes good arguments as constructed by syllogism (as laid out in the earlier works of the Organon) or enthymematic syllogism, especially when the syllogism begins from established beliefs (endoxa) as premises. Explicitly, these endoxa like “all men are mortal” are not certain or guaranteed to be true; but they are better than wild speculation, especially if you are trying to persuade someone. So Eliezer’s attack on the Greeks is off base, mistaking the assertion of validity for the assertion of soundness.
There’s nothing wrong with syllogistic logic, as long as you don’t make too much of it. Eliezer’s top-line conclusion is that “logic never dictates any empirical question [with certainty]”; I think you would be extremely hard-pressed to find a sentence in Aristotle which disagrees, and Eliezer’s clear imputation that the Greeks did disagree is ignorant and uncharitable. Logic is a useful tool for reasoning from premises you are reasonably confident in, to conclusions you can be similarly confident in.
It’s no straw man to say that Aristotle liked logic. The straw-manning comes when Eliezer asserts that “the Greeks” thought you could derive certain empirical truths from logic alone. (Parmenides, Spinoza, and Kant attempted this, but not Plato, Aristotle, or most philosophers.) Rather, Aristotle’s logic is all about taking established pretty-good beliefs (which are not called certain, but are generally acknowledged and are the best we have to work with) and having a sure way to arrive at exactly equally good beliefs. Putting this in writing was an incredibly valuable contribution to philosophy.
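If it helps to make the “conclusions as true as your premises” point concrete, here’s a minimal Monte Carlo sketch (the premise probabilities are invented; the bound itself is the standard uncertainty-propagation result for valid arguments, associated with Ernest Adams): a valid syllogism guarantees that the conclusion’s uncertainty is at most the sum of the premises’ uncertainties, and nothing stronger.

```python
import random

# Toy model: validity only constrains worlds where BOTH premises hold;
# there, the conclusion must hold. Worlds where a premise fails are
# unconstrained, which is exactly why validity isn't soundness.
def random_world(p1=0.90, p2=0.95):
    all_men_mortal = random.random() < p1     # premise 1
    socrates_is_a_man = random.random() < p2  # premise 2
    if all_men_mortal and socrates_is_a_man:
        return True                           # validity: no counterexample world
    return random.random() < 0.5              # otherwise, anything goes

N = 200_000
p_conclusion = sum(random_world() for _ in range(N)) / N

# Bound: P(conclusion) >= 1 - (0.10 + 0.05) = 0.85
print(f"P(Socrates is mortal) ~= {p_conclusion:.3f} (guaranteed >= 0.85)")
```

The syllogism transmits whatever confidence the endoxa deserve; it never mints new certainty, and Aristotle never claimed it did.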
Now for the part about concepts. Did Aristotle treat concepts similarly to the logical positivists? Honestly, I think not; my impression is that the average positivist was a nominalist about the question of universals, while the best summary of Aristotle’s view on the topic probably heavily uses the word hylomorphism. It’s kinda his own deal, like how Plato was Platonist. I don’t love Aristotle’s metaphysics, and I think there are powerful skeptical/nominalist critiques of hylomorphism, which is after all a formalist view of one kind or another. But I don’t think Eliezer really advanced them, or understood Aristotle’s (or any Greek’s) phenomenology of concepts at all. For a little taste of how nuanced Aristotle’s thoughts on words and concepts actually were, here’s another bit from the last book of the Organon:
Relevant Reading (By philosopher)
If we’re discussing the problem of “gee whiz, in what sense do concepts exist and truthfully inhere in an ever-changing world?” Virtually all of them! Here’s a short rogue’s gallery, take your pick if you’re intrigued by one in particular.
Plato: Plato’s answer is formalism. But even (or especially) if you think that’s absurd, his treatment of the question is incredibly valuable. Plato is deeply aware of, and deeply disturbed by, the fact that the world around him is changeable, that appearances and naively-constructed concepts deceive, and that nothing certain can be found in them. And the core of many of his dialogues is devoted to proving exactly that. Take the Theaetetus, where he talks about certain knowledge. Can we get it by sense perception? Not quite, appearances can deceive. What about judgment? Fallibility would indicate no. Is it justified true belief? Perhaps, but “justification” demands prior knowledge of the thing itself, so this is invalid by circularity! Plato strongly hints at his solution of formalism, but to pave the way to it, he demolishes more standard accounts first by trying to prove the slipperiness of ordinary concepts and the inaccessibility of certainty. Skeptical accounts can find a great deal to like. (Ever wonder why J.L. Mackie’s skeptical “argument from queerness” begins as a steadfast defense of Platonism as the only way to objective morality? For generations, skeptics have made hay by starting with Plato’s objections to others, then attacking Plato’s rehabilitative view as the final step of a deflationary account.) Parmenides is also recommended reading, as most of it is criticism of the theory of forms. But it’s not for the faint of heart, you’ll need some really good secondary lit—or far better, a supportive professor to read through it with. Trying to read and understand it by yourself is an aneurysm risk.
Aristotle: Often denser than Plato. But he’s far more methodical and much easier to interpret, since he’s not writing dialogues with Straussian readings or citing myths which he didn’t believe or any of that artistic jazz. The Nicomachean Ethics may be a good place to see him apply his method of discourse about the natural world, but the writings of his that are most relevant to this conversation are definitely Physics and Metaphysics. (Fun fact: the field was named for the book; “meta” is just Greek for “after”, so “Metaphysics” just means “after physics”, “more physics,” or maybe “physics 2”.)
Stoics: Chrysippus is your boy here. He is taken to be one of the first nominalists (a general term for one of the most popular non-realist views, i.e., that universal properties are words alone and not things in their own right). https://iep.utm.edu/chrysipp/#H5 has a summary you might like, and it may be the best we can do, since virtually all of Chrysippus’ actual writings are not extant (his views were passed to us by way of others’ summaries of them), and most other Stoics (like Epictetus or Aurelius) spent more time talking about ethics, with physics receiving more of a passing mention.
Epicureans: Really just Epicurus, as his teachings were passed down by Lucretius in De Rerum Natura. Virtually nothing else from this school is extant, but their influence is very significant. Steadfast materialists, atomists, atheists, and hedonists. This community would like their teachings a lot. I’ll take this opportunity to point out a trend which is commonplace throughout ancient philosophy; Epicureanism is atheist, but the text sings paeans to gods, using them as stand-ins for abstract concepts. This is weird, but not at all rare in ancient philosophy. Anytime you see someone invoke a god or a myth, before dismissing it as superstition, see if it’s useful to treat it as metaphor or conjecture instead. Remember that, for all the talk of gods and myths he engaged in, one of Socrates’ two crimes that he was killed for was impiety.
Skeptics: You will agree with these people less than their names imply you will. They thought some weird stuff; Academic or Pyrrhonian, either way it sometimes comes off as worshiping ignorance. In any case, formalists they were not, and their eponymous attitude comes across in their writings, which are very clear that if there are in fact universals, we are either unable to come to know them, or even morally forbidden to try.
Peripatetics, Cynics, Cyrenaics and more: there are so many ancient Greeks. Many of them may not have written anything of value on this question, I can’t say. This is the part where I confess ignorance of and wonder at the true diversity of Ancient Greek thought.
Another big gap in my knowledge is Christian and medieval thought, but I had enough friends who studied it to understand that my received caricatures of it were misplaced. Aquinas apparently contributed things to metaphysics in the vein of Aristotle. Maybe Augustine has dope metaphysics, no idea. God features prominently, of course, so know thyself and whether that’s a turn-off.
Early Moderns: Spinozism is super weird and monist and stuff. Maybe not that. Kantianism is incomprehensible, even in the original German, but if you can find a good professor to walk you through it (preferably in a classroom environment), there is a reason he was so influential. The obvious suggestion is the Critique of Pure Reason, and it is definitely the one that is relevant here. (It’s where the separation of syntheticity from apriority comes from! I don’t think it’s a good separation, but you will need to understand what it means if you want to understand many metaphysicians after him, most of all Kripke.) I personally like The Critique of Judgment too.
Continentals: Another gap in my knowledge. A friend read a lot of them and said “there’s no there there”, but I would guess that had as much to do with that friend as the writing itself. Another said Hegel is apparently very fun “in the right state of mind” (I think they meant psychedelics. This is not an endorsement of illegal drug use.) As with other categories on this list, I will acknowledge my ignorance of whatever brilliance might be here. For what it’s worth, if you are interested in critiquing the “classical” method of counterfactual reasoning—or reasoning in general—you may find allies here, even if they are strange bedfellows.
Moderns: Jumping right up to the 1900s. Meinong gets a bad rep but I still like him (do square circles exist? Maybe as much as anything else does!) Russell and Wittgenstein, you cited already. Tarski is also a great one; he created a formal theory of truth (“T-schemas” is a search term you can start with) which is intended to be generalizable over different uses of language. Almost certainly has connections to anything philosophy of language-related. I like Carnap a whole lot, and he did a lot of philosophy of science which you may find relevant. I dislike Kripke a lot, but there’s no question that his thought is intensely relevant to any philosophy which deals directly with the idea of meaning (he doesn’t think it’s a thing, or at least, wants a deflated version of it to be the norm). He took himself to be in the tradition of Wittgenstein.
Counterfactual Reasoning
I really like, and generally agree with, your summary of how edge cases and obtuse counterexamples have pushed people to somewhat absurd conclusions. I’ll provide some pushback, but first let me indulge myself in agreeing, and providing an example. My undergraduate senior paper employed an unfortunately complex variant of the trolley problem (guess how many tracks were involved?) to contest an arcane ethical principle relevant to a facially absurd variant of utilitarianism. It was truly approaching self-parody, and I was well aware, I just wasn’t sure what other topic I had an idea about which would fill enough pages. (funnily enough, I can write more than enough pages on random internet fora, though.)
For all that ethics should be able to provide us with answers, and there should be answers even for corner cases… it is extremely clear to me that academic ethics has gone off the deep end. Ethical views are now defined based on cases which are often so ridiculous that whatever decision one would make in those situations is probably a noncentral example of ethical or unethical behavior. It’s clear enough to me how we got here, given a certain kind of steadfast realism about ethics, and it’s unclear what exact countervailing view I think should prevail… but somewhere, somehow, we have gone wrong.
Is the source of the problem counterfactual reasoning itself? Perhaps a certain too-strong form of it. But I also think that a mature version of “conceptual engineering” would see a lot of it employed.
The example, or family of examples, that I want to give you and propose as an incredibly useful analogy here (one rich in exactly the “concept drift and sense accumulation” you describe) is law. It’s not exactly common usage, but legal language has a bunch of features that make it a desirable domain to apply “conceptual engineering” to. The boundaries of initially-vague concepts like “probable cause” or “slander” are often decided based on past definitions and laid-out sets of necessary and sufficient conditions in case law. But they are also subject to shift when corner cases are encountered which clearly do or don’t fall into the category—previous understandings of the necessary and sufficient conditions be damned. Ultimately, the courts converge on definitions that are useful at the very least, and they use a number of methods to do it, counterfactual reasoning and N&S conditions being some of the tools in the toolbox. Do you think law should dispose of those tools, and do you think it would lead to better decisions if they did? My answer is “no”; I think they’re great pragmatic tools in conjunction with other tools; and that makes me think that N&S conditions and counterfactual reasoning aren’t the real problem here. They can be useful ways to engineer concepts, rather than just a destructive way to attack them with corner cases.
Legal language is also nice because it gives us a clear sense of an evaluative objective, a way to “grade” our engineering project—in a word, we might say “justice.” (Meanwhile, to engineer common language, we might grade based on “clarity” or “intersubjectivity”.) When the existing body of rules and conditions still leaves room for doubt, we can employ and develop our terminology to produce results that accord with a notion of justice.
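To sketch what I mean in toy form (the rule and holdings below are invented for illustration, not real doctrine; the example is Hart’s old “no vehicles in the park” chestnut): a legal concept can behave like a set of necessary and sufficient conditions plus a growing list of decided corner cases that override them.

```python
# A toy model of precedent carving exceptions into an N&S rule.
def make_doctrine(conditions):
    precedents = {}  # decided corner cases trump the bare rule

    def classify(case):
        key = frozenset(case.items())
        if key in precedents:                 # stare decisis
            return precedents[key]
        return all(cond(case) for cond in conditions)

    def decide(case, holding):
        precedents[frozenset(case.items())] = holding

    return classify, decide

# Bare rule: a "vehicle" is anything motorized.
is_vehicle, decide = make_doctrine([lambda c: c["motorized"]])

ambulance = {"motorized": True, "emergency": True}
print(is_vehicle(ambulance))   # True under the letter of the rule
decide(ambulance, False)       # a court carves out emergency vehicles
print(is_vehicle(ambulance))   # False: the concept's boundary has moved
```

Neither the conditions nor the precedent list alone is the concept; the working concept is the whole apparatus, counterfactual reasoning included.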
I hope you like that proposed application of the theory. Interested to hear your thoughts on whether it’s fitting, or if not, why not.
Appreciate the thorough response; there are some good recs here. I haven’t read any of Chrysippus, and my knowledge of the Epicureans is limited to their moral philosophy (alongside that of the Stoics). That said, I can’t help but get the feeling you’re negging me a little with the references to skeptics, continentals, and professorial assistance! Fortunately or unfortunately, I’m less a rationalist than my presence here might imply—Bourdieu’s symbolic capital and ethology’s signaling theory are interchangeable in my book. Also fortunately or unfortunately, I’m not a uni student these days, my institutional education concluded a few years back, so I suppose I’ll have to make headway on any texts solo, without professorial help.
A quick meta-note: I think there’s a problem whereby people who study historic philosophy have incentives to steelman their subjects’ ideas and thinking, in order to justify their study. I imagine this claim will be received with some pushback, so I’ll try to break it down to less controversial parts, and we can sum them together. First, I think there are strong incentives in academia for everyone to constantly justify their work. Whether it’s prepping for department socials, getting tenure, applying for grants, or just coming to peace internally with a lifetime dedicated to scholarship, it’s hard to help this subtle narrative of self-justification. Second, I think when we read ancient texts, we’re in a tricky situation. As Wittgenstein once said of Plato,
Perhaps Witt overstates the case, but I feel like we can agree that texts are incredibly “gappy,” as the literary theorist Wolfgang Iser says. That is, so much of texts’ intended meaning resides in metonymic implication, “what can be left unsaid,” contextual situation, etc—and the further we get, culturally and temporally, from these texts, the easier it is to project contemporary schemas onto philosophy past. Not to give you homework, but you may be interested in reading the interview I did with philosopher Jonathan Livengood around the same time I wrote the piece under discussion. We talk a bit about N&S conditions, connections between Plato and positivism, but more relevant to our current discussion, we chatted about secondary sources’ treatment of their subjects. He says:
I think it’s hard to litigate this for specific texts, because of their gappiness. We’ll never know, unless/even if we have rich historiographic knowledge, whether we’re being overly charitable or uncharitable. I do think your Aristotle examples are compelling counter-examples to Yudkowsky’s analysis, but looking at some of the other philosophers you mention as being “woke” on concepts… there I’m a little more skeptical. (Kripke I think we should strike off the list, since he’s very explicitly a Wittgensteinian in thought; ditto with many continentals.)
I think it’s worth re-clarifying what I think the historic blindspots of philosophy have been, and the way I believe a style of inquiry has proven unproductive. I know my original piece is both very long, by online standards, and not especially clear structurally.
Essentially, I think that most philosophical projects which fail to appreciate the Wittgensteinian “words don’t work that way” lesson will end up doing lexicographic work, not philosophy. My claim is that, with a concept like “causality” or “justice” or “beauty” (there are dozens of equally contested terms, historically), there is no “there” there. Rather, there are a multitude of continuous, analogically and historically related phenomena which are “close enough” in various ways that, with some extra specification via contextual use, these handles are pragmatically useful. If one seeks to analyze the natural language concept “causality” or “justice” or “beauty” by finding commonalities between the natural language meanings, they will end up doing primarily historic, cultural, and lexicographic work, because these word-bundles are in no way atomic, they are in no way essential. In another culture, or another language, there might be twelve types of causality or justice or beauty. They might conflate justice and beauty as a single term. How, then, does it make any sense to treat these, implicitly, as if they were natural kinds, that is, to look (as many 20th C philosophers do) for an explanation of causality that is robust to all native-English usages, but also has some deep underlying quasi-essence which can be singularly studied, analyzed, and understood? Philosophers in the know today will readily admit there are no natural kinds—species were the last example to cling to, and species delineation is very messy and socially constructed, as any undergrad biologist knows. There are only continuities, at least at levels higher than particles, because the world is incredibly complex, and the possible arrangements of matter functionally infinite. (I know very little about physics here, so excuse any ignorance.) Our concept of causality, as Livengood talks about in the interview, is tied up in a long cultural history of moral judgments and norms, in folk theories and historically contingent metaphors. It is not a single coherent “thing.” And its bounds do not relate to intrinsic material forces so much as they do human use. Native speakers will attribute causality in a way that is pragmatic, functional, and social.
In other words, natural language is near-useless, and often counterproductive, in trying to understand natural territories. Until recently, we might remember, plant and animal species were classified by their value to humans—poisonous vs medicinal plants, edible vs nonedible, tame vs wild animals, noble vs base beasts, etc. Imagine, now, a natural philosopher attempting to hash out a concise and robust definition of “noble animals,” separate from a nominalist thread like “they’re all described as noble by humans,” as if there were some property inherent to these organisms, separate from their long cultural and historic understanding by humans. Such a philosopher would find out, perhaps, a bit about human beings, but almost nothing worthwhile about the animals.
This is the situation I see with conceptual analysis. Natural language is a messy, bottom-up taxonomy built around pragmatic functionality, around cultural and social coordination, around human life. Conceptual analysis acts as if there is a “there” there—as if there were some essence of “justice” or “causality” that maps closely to the human concept and yet exists separate from human social and cultural life. I submit there is not.
(These folk might quibble that they don’t believe in essences, but as I remark to Jon, my opinion here is that “a classical account of concepts as having necessary and sufficient criteria in the analytic mode is in some way indistinguishable from the belief in forms or essences insofar as, even if you separate the human concept from the thing in the world, if you advance that the human concept has a low-entropy structure which can be described elegantly and robustly, you’re essentially also saying there’s a real structure in the world which goes with it. If you can define X, Y, & Z criteria, you have a pattern, and those analyses assume that, if you can describe a concept in a non-messy way, as having regularity, then you’re granting a certain Platonic reality to the concept; the pattern of regularity is a feature of the world.”)
We might consider the meaning of textual “meaning.” It can refer to an author’s intention, or a reader’s interpretation. It can refer to a dictionary definition, or the effect of a cause. All these are present in our language. Literary theorists spent the 20th century arguing over whether meaning just “is” unknowable author intention or diverse reader interpretation or some formal, inherent thing inside a text. (This last position is absurd and untenable, but we’ll set that aside for now.) This “debate” strikes me as a debate not over the world, or the territory, or the nature of reality, but over whether one sense of a term ought to be standard or another. It is fundamentally lexicographic. There are many valuable insights tucked into these incessant theoretical debates, but they suffer from residing inside a fundamentally confused frame. There is no reason for one singular definition of “meaning” to exist; “words don’t work that way.” Many senses have been accumulated, like a snowball, around some initial core. The field ought, in my opinion, to have separated authorially intended meaning from reader-interpreted meaning, called them different terms, and called it a day. I say “ought”—why? On what grounds? Because, while in everyday linguistic use, a polysemous “meaning” might be just fine & functional, within the study of literature, separating intent from interpretation is crucial, and having diverse schools who use the term “meaning” in radically different ways only breeds confusion & unproductive disagreement. It is hard for me to understand why philosophers would ever approach the “causality” bundle as a whole, when it is clearly not in any way a singular concept.
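To put the “split the term” proposal in the bluntest possible form (the type names below are mine, purely illustrative): make authorial intent and reader interpretation distinct kinds of thing, so that conflating them becomes a visible error rather than a silent equivocation.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorialMeaning:
    text: str
    intent: str            # what the writer meant to convey

@dataclass(frozen=True)
class ReaderMeaning:
    text: str
    reader: str
    interpretation: str    # what this particular reader took away

# One text supports one AuthorialMeaning and arbitrarily many
# ReaderMeanings; no function here can even ask for "the" meaning.
def reception(readings: list[ReaderMeaning]) -> Counter:
    return Counter(r.interpretation for r in readings)
```

Nothing deep happens in the code; the point is that once the senses are separated, the century-long “debate” over which one is really meaning has no remaining question to be about.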
I know many philosophers have attempted to carve up terms more technically, in ways more pragmatically suited to the kinds of inquiries they want to make (Kevin Scharp on truth comes to mind), but many, historically, have not.
Second, any philosopher who takes edge cases seriously in trying to understand natural language does not understand natural language to begin with. Because our words are functional tools carving up a continuous material space, and not one-to-one references to real, discrete objects with essences, they are optimized for real human situations. Much of the fretting over gendered language, or racial language, arises because there is increasing awareness of “edge cases” or “in betweens” that disrupt our clean binaries. Similarly, Pluto’s ambiguous planet/non-planet status arises because it, and other bodies in our solar system, sit awkwardly between cultural categories. There is no such “thing” as a planet. There are various clusters of atoms floating around, of many different sizes and materials, and we’ve drawn arbitrary lines for functional and pragmatic reasons. The best piece I can recommend on this is David Chapman’s “ontological remodeling” (I quibble with his use of “ontological,” but it’s no matter—it shows how cultural and historical, rather than inherent or natural, the concept of “planet” is.)
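Here’s a minimal way to draw the contrast (all feature numbers invented for illustration): membership as a checklist of necessary and sufficient conditions versus proximity in a feature space, where Pluto simply sits between clusters.

```python
import math

def ns_planet(body):
    # A checklist in the style of the 2006 IAU definition: each condition
    # necessary, jointly sufficient. The verdict is a hard binary.
    return body["orbits_sun"] and body["round"] and body["cleared_orbit"]

def dist(a, b):
    return math.dist(a["features"], b["features"])

# features: (mass proxy, orbit-clearing score), both made up
earth = {"features": (1.00, 0.99)}
ceres = {"features": (0.01, 0.05)}
pluto = {"features": (0.10, 0.40),
         "orbits_sun": True, "round": True, "cleared_orbit": False}

print(ns_planet(pluto))     # False, full stop
print(dist(pluto, earth))   # ~1.08
print(dist(pluto, ceres))   # ~0.36: nearer the asteroids, comfortably inside neither cluster
```

The checklist hands back a verdict; the feature space hands back the actual situation, which is that the lines are ours to draw.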
I’ll quote the philosopher Marcus Arvan here in the hope of clarifying my own often messy thought:
I understand that this critique is focused on 20th C analytic, and that your comment above is focused more on the ancients. But it seems like big picture, what we’re trying to figure out is, “How well-known are these problems? How widespread are philosophical practices which fall into linguistic pitfalls unwittingly?”
Showing my hand, in the nominalist/conceptualist/realist frame, it seems to me that any frame but nominalism is scientifically untenable. Various cog-sci and psych experiments have, in my opinion, disproven conceptualism, whereas the collapse of natural kinds bars, for those empiricists unwilling to believe in the supersensory realm, realism. I do want to explore nominalism more, and probably should have included at least a paragraph on it in this piece. Many regrets! I believe I felt under-educated on the topic at the time of writing, but this is a good reminder to read up. From the secondary sources I’ve come across, it seems like nominalism is the closest analogue to the emerging modern view of language, universals, natural kinds, abstract entities, etc.
(Sidenote: isn’t Aristotle a realist like Plato? Or at least, in the medieval era his legacy became such? I usually see him pitted against nominalism, as one of the orthodoxies nominalism challenged.)
My big-picture understanding of the philosophical history is that a Platonic realism/formalism outcompeted more nominalist or pragmatic contemporaneous views like those of Protagoras (or perhaps the Epicureans!). The diversity of Greek thought seems incontestable, but the “winners” less so. (It’s not for nothing they say all philosophy is footnotes to Plato.) Realist views go on to dominate Western philosophy up until the medieval era, bolstered by the natural incentives of Christian theology. Nominalism emerges, and claims a non-trivial number of philosophers, but never fully replaces more realist, analytic, or rationalist viewpoints. (I include rationalism because the ideas of the a priori and the analytic are both, IMO, fatally undermined by nominalism + the messiness of natural language.) American pragmatism strikes hard against the Hegelian rationalisms of its day, but regrettably makes little long-term impact on analytic. Similarly, Wittgenstein’s warnings are largely ignored by the analytic community, which continues on with conceptual analysis into the present day, as if nothing were the matter with their methods and puzzle-like riddles. (The continentals, for all their problems, did take seriously Wittgenstein’s critique. Foucault’s Archaeology of Knowledge, or Lyotard’s examination of language games, or Bourdieu’s dismissal of essentialism, each come to mind.) I am curious if you’d contest this.
I am still trying to understand why the linguistic critiques of such riddles and paradoxes, by a philosopher as well-known and widely read as Wittgenstein, have not more widely impacted the academic philosophy community. It seems you’re on my side on this one, the issues with contemporary academic philosophy, so allow me to quote some speculation you might find interesting. The first cause is likely self-selection out: whereof one cannot speak, thereof one must be silent. And so it goes with graduate students pilled on later Witt. Second are problems of selection proper: knowledge regimes, and their practitioners who have invested lifetimes in them, do not cede their own follies lightly. Meanwhile, they continue to select students who confirm, rather than challenge, their own intellectual legacies—both unconsciously, because of course they believe their intellectual legacies are more correct or important, and consciously:
Still, when I learned that philosophers today still take seriously one another’s intuitions (and about bizarre, other-worldly counterfactuals) as sources of knowledge about reality, I realized that inexplicable amounts of folly can persist in disciplines. Alas.
Regarding law, that is indeed a good example of counterfactuals shaping language, though I’m not sure how much legal definitions filter into mainstream usage. Either way, legal language really is such a rich area of discussion. Textualist views, which I would previously have dismissed as naive—“there’s no inherent or objective meaning in the words, man! Meanings drift over time!”—have some compelling pragmatic arguments behind them. For one, a Constitutional provision or Congressional law is not the product of a single designer, with a singular spirit of intent, but rather the result of a dynamic process within a committee of rivals. A bill must pass both chambers of Congress and then the Executive chair; at each stage, there will be voters or drafters with very different intentionalities or interpretations of the wording of the law being passed. Textualism, in this frame, is a pragmatic avoidance of this chaotic, distributed intentionality in favor of the one common source of truth: the actual letter of law as written and passed. How can we meaningfully speculate, in such a system, about what Congress “intended,” when the reality is a kludge of meanings and interpretations loosely coordinated by the text-at-hand? A second case for textualism is that it prevents bad incentives. If a lawmaker or coalition of lawmakers can create a public impression of the intent, or spirit, of a law, which exists separate from the actual impressions of the voting and drafting representatives, and this intent or spirit is used in court cases, an incentive is created for strategic representation of bills in order to sway future court cases. Third, a textualist might appeal to public transparency of meaning, in the vein of the Stele of Hammurabi. A population must be able to transparently know the rules of the game they are playing. Oliver Wendell Holmes: “We ask, not what this man meant, but what those words would mean in the mouth of a normal speaker of English, using them in the circumstances in which they were used … We do not inquire what the legislature meant; we ask only what the statutes mean.” How they are understood is, from this perspective, more important than the intent—since individuals will act according to the law as understood (and not as intended).
These are the steelmen of textualism—look what happens, however, when it’s applied naively:
The issue in Marshall was whether blotter paper impregnated with the illegal drug LSD counts as a “mixture or substance containing” LSD. The question matters because the weight of the “mixture or substance” generally determines the offender’s sentence. A dose of LSD weighs almost nothing compared to blotter paper or anything else that might be used in a similar way (such as gelatin or sugar cubes). If the weight of the medium counts, a person who sold an enormous amount of pure LSD might receive a much lighter sentence than a person who sold a single dose contained in a medium. Also, the per-dose sentences for sales of LSD would bear an arbitrary relationship to the per-dose sentences for sales of other drugs, because the LSD sentences would be, for all practical purposes, a function of the weight of the medium.

[...]

The majority ruling held that blotters were “a mixture or substance containing” LSD, and therefore part of its weight. Judge Posner’s dissent argued that the “mixture or substance” language should be interpreted not to include the medium, because the majority’s conclusion led to irrational results; indeed, results so irrational that they would be unconstitutional if the statute were not construed differently.

[...]

Treating the blotter paper as a “mixture or substance containing” LSD produces results that are, according to Judge Posner and Justice Stevens, who dissented in Chapman, “bizarre,” “crazy,” and “loony.” Selling five doses of LSD impregnated in sugar cubes would subject a person to the ten-year mandatory minimum sentence; selling 199,999 doses in pure form would not.

How did the court come to this decision?

The Supreme Court used dictionaries to define “mixture,” coming to the conclusion that a blotter fit the definition (“a ‘mixture’ may … consist of two substances blended together so that the particles of one are diffused among the particles of the other”) and that this was sufficient for their ruling. And yet, Strauss writes, this dictionary definition has little to do with normal English use of the word mixture, which would never call a water-soaked piece of paper a “mixture” of paper and water, or a piece of paper soaked in salt water and dried, with the salt crystals remaining, a “mixture” of salt and paper.

A man was sentenced to decades in prison over this. The truth is that Congress almost certainly did not intend to write legislation in which selling five doses of sugar-cube LSD resulted in a higher sentence than 200k pure doses. The situation eerily echoes philosophical discourses I’ve come across. Chalmers, for instance, looking up “engineering” in the dictionary in order to figure out the solution to analytic’s problems, is not nearly as harmful as the Marshall ruling. But it is equally confused. The map is not the territory, as LessWrongers are fond of saying—and justice is not found in the dictionary.
Apologies for the wall of text.
“Puzzle-playing” reminds me of Kuhn’s The Structure of Scientific Revolutions:
https://samzdat.com/2018/05/19/science-under-high-modernism/
So, that’s just academia for you, except it might be worse in the Philosophy department, for all the reasons that you outline?
Hmmm, after giving it a day, I feel like I may have unfairly or unproductively bombarded you here, so know I won’t be offended if I don’t get a response.
I’ll try to read some of the recommendations, and perhaps in a while I can come back to this conversation with more of value to contribute.
Plato was not a “20th Century Platonist”?
https://samzdat.com/2018/01/26/platonism-without-plato/