I’m not sure whether this is better-framed as a babble question or a “serious” question, so, treat it as you will.
If you could sit down and train in philosophy the way you train in arts or sports or maths, how would you do it? What does the Deliberate Practice of philosophy look like? What would a philosophy coach do? Would there be philosophy competitions? What could they look like?
Or, if you think philosophy as it is currently practiced is useless/suboptimal, what might you put in its place?
Bit of a side point: One thing I got from spending a lot of time in the philosophy department was an appreciation for just how differently many philosophers think compared to how I tend to think. I spent a substantial fraction of my time in college trying to get at the roots of those differences—what exactly are the differences, what are the cruxes of the disagreements, and is there any way to show that one perspective is better than another? (And before anyone asks—nope, I still don’t have good answers to any of those.)
It opened my eyes to the existence of entirely different ways of thinking, even if I still have a hard time wrapping my brain around them.
As I said, I still don’t have a great way to even describe the differences very well. But I will at least say that if the kind of Bayesian and/or reductionist thinking that people tend to promote on LessWrong seems essentially right to you and (once you see it) almost common sense, then you should consider spending time talking to traditional non-Bayesian analytic philosophers and try to understand why their arguments feel compelling to them. Or maybe just read a bunch of articles on the Stanford Encyclopedia of Philosophy that describe non-reductionist, non-Bayesian approaches to metaphysics or epistemology. Maybe the discussions there will seem perfectly reasonable to you, but maybe you will discover that there are really smart people who think in fundamentally different ways than you do. I think just the realization of that seems useful.
A friend of mine once suggested a stand-up-comedy type model for philosophy (“stand-up philosophy”). I think this could have some good dynamics. Imagine philosophers competing to blow the minds of the audience and judges.
I like the idea of stand-up philosophy. I worry at the implied goal though; if it’s to blow people’s minds that might create incentives to signal and to pursue goals such as being exciting or original rather than accurate, precise, or convincing.
I agree, but I think there is a place for both fun and seriousness. Also, I personally feel like stand-up comedy sometimes is the best way to get certain serious points across, and I’d expect the same thing to be true of stand-up philosophy. Seems like a happy consequence of babbling.
That’s all up to the judges, audience, and participants. If you took a typical comedy crowd, then of course you’d basically just get comedy out, with maybe a bit of a philosophical twist. If academic philosophers started doing stand-up philosophy with each other, then you’d get something else. LWers would get yet a third thing.
If we assume that the judges are the best we could select, well, then you still get some distortion from the fact that they probably have to judge fairly quickly and will be prone to certain human biases (eg judging attractive people more highly).
I think it’s totally desired to have some originality bias; accuracy, precision, and convincingness are legitimately not worth much without originality. OTOH, yeah, this can train some bad habits.
Most judges (in my very limited knowledge) don’t go completely on gut feeling, but rather, have a rubric. For example, judges might rate contestants on Originality, Accuracy, Precision, and Convincingness and add up the scores. Or they might subtract points for each standard bias/fallacy, or whatever. This sort of thing can help avoid overly skewed judging.
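As a toy sketch of how such rubric-based judging might be mechanized: the category names, the 0–10 scale, and the fallacy penalty below are all made-up assumptions, not a real judging standard.

```python
# Hypothetical rubric-based judging. The categories, rating scale, and
# penalty values are illustrative assumptions, not an actual standard.

RUBRIC = ["originality", "accuracy", "precision", "convincingness"]
FALLACY_PENALTY = 2  # points deducted per standard bias/fallacy spotted

def score_contestant(ratings, fallacies_flagged=0):
    """Sum the rubric ratings (each assumed 0-10), then deduct penalties."""
    if set(ratings) != set(RUBRIC):
        raise ValueError(f"ratings must cover exactly: {RUBRIC}")
    return sum(ratings[c] for c in RUBRIC) - FALLACY_PENALTY * fallacies_flagged

ratings = {"originality": 9, "accuracy": 6, "precision": 7, "convincingness": 8}
print(score_contestant(ratings, fallacies_flagged=2))  # 9+6+7+8 - 2*2 = 26
```

Adding up scored categories rather than giving one gut-feel number is the same move competition judging uses elsewhere (gymnastics, debate): it makes the originality bias explicit and tunable instead of implicit.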
Very cool idea! My weak understanding is that this is something like what orators would do historically. Actually I guess there is a spectrum where giving a lecture is on one end and stand-up is on the other. I’m not sure where people fell historically.
I can’t see this being net-positive. Blowing the minds of the general public is more about confidence and presentation than depth.
Oh, it doesn’t have to be the general public. I doubt it would be. You could select good judges.
One thing that served me well, I think, was ambition. Even before I went to college, when I was first falling in love with philosophy, my goal was to solve it. I wanted my worldview to make sense, dammit, and not be full of contradictions and incoherencies and “we don’t ask that question” moments. Most of the philosophers regarded as great (Leibniz, Russell, Lewis, etc.) were grand systematizers who attempted to have an answer to everything, and were not satisfied with paradoxes/contradictions/incoherencies/articles-of-faith anywhere. Alas, no one so far has been able to succeed at this ambition, but nevertheless I think I learned so much more, and came to beliefs that are so much better, because of it.
Why? Well, one reason (though probably not the only reason) is that it helped me prioritize. It helped me move on from interesting-but-not-that-important puzzles and go for the Big Questions. And go straight for the throat instead of lollygagging.
The experience of repeatedly trying to construct a coherent, non-contradictory, complete worldview, only to encounter some new problem or puzzle that brings it crashing down, is perhaps analogous to the experience of trying to construct a hypothesis about how celestial bodies move, or how optics works, or something, that can accurately predict all the data from all your experiments.
To turn this into a training technique, we might:
Have a big list of questions, which approximates all the big questions in philosophy.
Try to answer all of them from a plausible, coherent perspective.
Get feedback on how coherent (and plausible) the perspective is, how well-argued the answers were, etc.
Or, perhaps, a courtroom-like examination process where a committee selects a line of questioning? (Roughly, draw some questions randomly off of the Big List to try to catch the student off-guard, and then depending on the student’s answers, go down a line of questioning which best searches for flaws in the view?)
Funnily enough, the medieval Disputations practice seems to have been kinda like this? Not the game version we played, the version that was used for dissertation defenses. IIRC.
Direct and immediate feedback is a good idea. Given your first suggestion, is the assumption that philosophers would attempt to address everything? Many are going to be specializing, and may not have thought much about many central questions. I can tell you a lot about metaethics. I can’t tell you much about metaphysics. Maybe that’s a mistake, but given the current incentive system and the way academic fields are set up, it’d be hard not to narrowly focus in this way.
I was just trying to go off of Daniel K’s prompt, which was precisely for philosophers to try to address everything. I agree that this is not obviously the best route.
For deliberate practice you need feedback that tells you whether or not what you are doing is right. The traditional way to get that is essay writing, with a teacher afterwards giving you critical feedback on what you wrote.
Besides written communication, oral communication can work as well if you are doing it together with skilled people.
I think one of the main problems with philosophy is that it’s too detached from practical concerns. If you want to understand the nature of knowledge and how it’s acquired, you should be able to give a practical answer to questions such as how our society should go about deciding whether we know that a particular drug works or doesn’t.
Yes, the question is messy but that’s the nature of reality.
I rarely received much in the way of substantive feedback when I was in a philosophy graduate program. This isn’t a critique of professors; they have so many things to do that going through a paper line-by-line to offer feedback is incredibly time consuming. So it’s hard to achieve much of it, in practice, in graduate programs.
There are also norms of politeness that can impede criticism. I’ve been to far too many talks where the audience goes easier on the speaker than they would if they really wanted to help the person improve. Then you get echo chambers. If a bunch of people are moral realists or think the only solutions to the hard problem are anti-physicalist or whatever, the only pushback a person presenting is likely to receive is internalistic—that is, it comes from people who share background assumptions that should themselves be consistently challenged.
Overall, I think the institutions we have in place don’t do a good job of providing enough critical feedback if one is studying and presenting on philosophy.
I remember Pirsig, in Zen and the Art of Motorcycle Maintenance, arguing that the way you actually create an environment where a lot of feedback happens is to create a lot of peer feedback. The act of giving feedback and then talking with the original author is also a high-feedback activity.
Pirsig also writes that philosophy is not actually taught at universities. Instead, philosophy departments teach “philosophology”, the study of what other philosophers have written.
But his book was nearly fifty years ago. Is that still the state of things?
Pirsig draws a contrast with music students, whose studies consist primarily of developing their skill at their instrument, not musicology, whereas philosophy students never do philosophy at all, only philosophology. This is why a Ph.D. thesis in philosophy typically consists of an exhaustive scholarly history of everything of consequence that has previously been written on its topic, with a single chapter late on setting out the author’s modest contribution, and a few concluding chapters relating it to the history. In science subjects, the thesis sets out primarily what its author has done, and the exhaustive history is replaced by a literature survey of no more than a chapter.
A colleague once showed me a newly completed Ph.D. thesis he had received from its author, which he found rather odd. I looked through it and laughed, because it took exactly the form I recognised from Pirsig. So I don’t think he was caricaturing.
I was in a terminal MA program, and that was very much the case. Entire courses were taught on the works of specific people, e.g. “David Lewis,” and much of the focus was on exegesis of written works, ranging from the Greeks to the mid 20th century. There were certainly exceptions to this, but philosophology was very much alive and well where I was. I don’t know how it is for PhD programs, and I’d bet it varies considerably.
There’s a recent trend towards formal methods, and you’ve had some movements like experimental philosophy that have also deviated from these trends. I myself went into a psychology PhD program since I thought there’d be more tolerance for my empirical approach to philosophy there (I was correct), and because philosophology isn’t my thing. I’ve noticed that almost all of the papers and work I deal with in philosophy was published in the last 30 years or so, with an emphasis on the past 15 years or so. But I’m an advocate of “exophilosophy”: doing philosophy outside the formal academic setting of philosophy departments, so I have limited insight into the state of philosophy PhD programs proper.
Pirsig’s remarks seem a bit pessimistic. I’ve seen plenty of dissertations or articles generated from philosophical work that are very argument-centric. I don’t know what an exhaustive search of the literature would reveal, but people can and do succeed at focusing on doing philosophy and presenting arguments; there isn’t some universal demand that everyone focus mostly on history. I’d like to hear from some people with more direct experience in these programs, though.
That’s a good thing. The closest approximation to philosophical truth we have is the ability to answer all currently known objections.
The contrast with music seems misleading. Almost any other field is full of studying the past! You don’t primarily “learn physics” by going and doing experiments; you mainly learn it by studying what others have already done.
Granted, physicists read new textbooks summarizing the old results, while philosophers more often read the original material. That’s a pretty big difference. However, that might be because philosophy is more directly about the critical thinking skills themselves (hence you want to read how the original philosopher describes their own insight), while physics is more just about the end results of that process.
It is common for phd theses to have a very large literature review. How over-the-top is philosophy in this, really? I would guess that many “humanities” areas are similarly heavier on the lit review. (Although, you could plausibly accuse those areas of the same dysfunction you see in philosophy.)
Very little about the courses I took in philosophy was directly about how to think better. They were much more focused on understanding what some thinker said for the sake of doing so. If the purpose of these courses was to improve critical thinking, I don’t think I benefited much from them, and it’s a strange and roundabout way to pursue the goal. Plus, they almost never collect any actual data on whether these methods work.
My dissertation is in psychology (though it is heavily focused on philosophy as well), so I’m not really sure myself how much is focused on a literature review. Mine is almost entirely critical discussion of studies, and is only concerned with studies that go back to 2003, with the bulk focused on 2008 onwards. I’m literally responding to current papers as they come out. So, it’s very recent stuff. I’d be surprised if this weren’t often the case for philosophers as well.
For instance, suppose you were writing in metaethics. You could easily write a dissertation on contemporary issues, such as evolutionary debunking arguments, companions in guilt arguments, phenomenal conservatism, moral progress, or any number of topics, and the bulk of your discussion could focus on papers written in the past 5 years. So, it’s simply not the case that one’s approach to philosophy is that contingent on the past, or an extreme focus on literature reviews.
I went to a relatively backwater undergrad, and personally, I thought the philosophy profs had a big emphasis on thinking clearly. My Epistemology class was reading a bunch of articles (ie the textbook did nothing to summarize results, only presenting the original texts); but, class was all about dissecting the arguments, not regurgitating facts (and only a little about history-of-philosophy-for-history’s-sake).
Side note, the profs I talked to also thought philosophy was pretty useless as a subject (like objectively speaking society should not be paying to support their existence). I think they thought the main saving grace was that it could be used to teach critical thinking skills.
Possibly, this is just very different from grad programs in philosophy.
Well, yeah.
Ah, ok.
The lazy answer is probably “whatever it is that philosophy grad students do.” I honestly think this might be a good question for a zoom call with a bunch of PhD philosophy students.
That’s a reasonable starting point and probably somewhere to look. But I’ve been a philosophy graduate student and I don’t think most of what I did made me any better at philosophy.
Among other issues, my experience with philosophy graduate education is that philosophers almost exclusively spoke to one another and read works of philosophy. This risks developing a narrow and insular conception not only of philosophy but, given the ubiquitous reliance on intuitions, of what “common sense” is like.
Philosophers will declare something as “obvious” or “intuitive” with little regard for who it is supposed to be obvious or intuitive to; the implicit presumption is that it’d be obvious or intuitive to everyone, or anyone who is thinking correctly, and so on, but little serious consideration is given to the possibility that one’s intuitions may be idiosyncratic and not probative of the world so much as how the individual with the intuition is disposed to think about the world.
In short, a lot of philosophy strikes me as bad psychology, with a sample size of one (yourself) or a handful of idiosyncratic people whose views aren’t even independent of one another because they’re in the same graduate program and all talk to and influence each other.
I’m curious about how one would know one has become better at philosophy? For other skills, our reference point is either an objective metric, or the regard of our colleagues. But for philosophy, there’s no metric, and it sounds like you’re saying that the regard of one’s colleagues is not a very useful signal either. After all, you’re claiming that studying philosophy made you no better at the subject, and seem to be characterizing your experience as typical.
Anyway, this seems like the key problem. How can you know how to improve unless you have a way to define improvement? And yet philosophy resists such a definition perhaps more than anything else. Except in the sense of “being familiar with more and more works that are deemed to be important works of philosophy.” That seems like a place to start.
To some extent, I know I’ve gotten better at philosophy simply by finding that my beliefs have changed, and my new justifications clearly seem much better-grounded than my old. This doesn’t work as a general tool (obviously it overly praises those who come to strong convictions, since they will rate their new beliefs extremely favorably), but it’s far more than nothing.
It seems to me that the regard of colleagues would, actually, be a useful signal as well (even if problematic for similar reasons).
However, I’m far more fond of mathematical philosophy, where it is easier to see whether you’ve accomplished something (have you proven a strong theorem? have you codified useful mathematical structures which capture something important? these are subjective questions, but, less so).
It sounds like you have a pragmatic perspective. By synthesizing several perspectives on philosophical improvement, you can find a more robust measure of your skill. All our suggestions so far might be more powerful in combination. We might measure exposure and command of philosophical texts; an increase in self-perception of having a well-grounded perspective over time; an increase in the regard and, perhaps, status of one’s colleagues; and the provable aspects of one’s output. In combination, these seem like a reasonable aggregate measure of improvement.
It would therefore be interesting to know which of these metrics was not showing improvement in Lance’s grad program. Were the other students failing to achieve influence? Not building a command of previous literature? Perceiving themselves as ever-more-befuddled as they studied more? Working on unprovable problems, or failing to find proofs?
Small, out-of-the-mainstream bubbles like Rationalism and Objectivism have the same problem, but worse. If the problem is insularity, you can’t fix it with more insularity.
If philosophers would be a bit less shy about their reliance on intuition, perhaps they could openly admit that they are relying on their own personal intuition without projecting it on anyone else. There’s nothing shameful about analyzing one’s personal intuitions, for one’s own benefit and for the benefit of others. For example, I am happy to read someone like Russell or Descartes examining their own intuitions. Someone’s intuitions can be interesting, and can be a source of insight!
But philosophers seem to have a pretty strong tendency to try and sound more authoritative, stating something as a generally-shared intuition.
Philosophers are so un-shy about their use of intuitions that they write books and articles about the subject. https://plato.stanford.edu/entries/intuition/
True, but I perceive room for them to be even less shy, and I stand by my earlier speculation. (I’ve read enough philosophy to know what Lance Bush was pointing at.)
You know who’s actually shy about their use of intuitions...? People who are in denial about it.
I often feel like a lot of problems would be solved if we could just make working through Li and Vitanyi a requirement for aspiring philosophers.
There are two things similar to a “philosophy coach” I can imagine:
One would be someone who has thought deeply about something you’ve only thought about briefly: you could try to express your thinking to them, and they could help you quickly learn which avenues of thought are more promising than others, and what path they took to reach where they are.
Another is someone better at philosophy writing than you helping you edit your drafts.
I think a perhaps-more-practical version of “work through Li & Vitanyi” is “learn computer science”—eg, the book “logic and computability” might be a good text for philosophers. (It is thorough and technical, but introductory.)
I use algorithmic information theory (in philosophy-esque applications) about 200x more than I use model theory, so I’m not sure skipping it is actually more practical. Though I agree that you might be able to learn the content-independent “way of thinking” in easier ways.
I just think there’s something really important about the concept of computation, for philosophy. It seems even more important than materialism, in terms of how it shapes thoughts about a variety of subjects. Like, yeah, algorithmic information theory is pretty great, but as a prerequisite you should be thinking of things in terms of computations, and this to me seems like the more important overall insight.
I’m absolutely on board with this, though I doubt it would help much with the kinds of work I do in either field (philosophy or psychology). Then again, maybe it would: in the last few years I had no choice but to pick up R, and I think I’d be quite a bit further along if I’d taken computer science courses earlier. Even so, there’s still a lot of low-hanging fruit for people who work in philosophy or philosophy-adjacent fields that probably wouldn’t benefit all that much from understanding computer science. That’s not to say it might not help in some way, but I’m not sure it would be of comparatively high value compared to studying other topics.
I’m mostly referring to the way of thinking where you can think of things in terms of computations. Without this, you might have weird ideas about what the mind can do with information, what can constitute a successful map/territory relationship, etc. Sorry, I’m not being very specific here; I just think there are a ton of philosophical errors which boil down to not understanding computation.
Granted, most of the important points are probably already “in the air” from computers playing such a central role in life and society today. People probably don’t need formal information theory to have good intuitions about what information is, today, compared to in the past. But it probably still helps!
Current professional philosophers seem to be generally too confident about their ideas/positions, and lack sufficient appreciation of how big idea/argument space is and how little of it we’ve explored. I’m not sure if this is a problem with humans, or a problem with how philosophers are trained and/or selected. We should at least make a concerted effort to rule out the latter.
One concrete suggestion is instead of measuring progress (and competing for status, etc.) by how many open problems have been closed (which doesn’t work because we can’t be very sure whether any proposed solution is actually correct), we should measure it by how many previously unsuspected problems have been opened, how many previously unknown considerations have been pointed out, etc. This is already true to some extent, but to have high status, philosophers still seem expected to act as if they’ve solved some important open problems in their field, i.e., to have firm positions and to confidently defend them.
(I’m not sure this is the kind of answer you’re looking for, but I’ve been thinking this for a while and this seems a good chance to write it down.)
I would suggest spending time thinking about mathematical paradoxes. One advantage of these compared to many philosophical questions is that it is easier to engage with them out of a desire to know the truth, rather than a desire for things to work out a particular way.
Another advantage of exploring paradoxes is that often when you make a mistake, there will be a mathematical proof that you are wrong.
Another suggestion I would make would be to pick the area of philosophy in which you currently feel you can make the most progress on and to leave areas in which you feel more uncertain until later.
I’ve gained a lot of benefit from writing down my ideas. Often arguments that seem airtight in your head will turn out to have additional assumptions once written down. Often if I come back a few months later, I can see areas where my argument isn’t as strong as it could have been.
I can also see benefits of writing steelmans of other people’s positions.
Being good at communicating philosophical ideas is a separate skill and one I can’t advise you on.
I think the most basic philosophical skill is understanding past arguments. Reading Plato and Aristotle is good practice for this, because their books have a lot of subtle arguments that don’t require much previous tradition. Then maybe you’ll start reading later stuff, making your own arguments and tracing them through the literature, talking to people and so on.
Or if you don’t find the fun in reading all these arguments—then maybe just leave it. The same thing could happen in any other area, like math or physics or programming. Feynman famously said about physics that “it may give some practical results, but that’s not why we do it”. Annie Dillard makes a similar point:
For me personally, some arguments from classical philosophy are fun (learning as recollection and so on), but not so much fun that I’d like to do it as a career.
I think there are very distinct skills in “philosophy”, which have different measures for achievement/skill, and therefore different training regimens. Like most things one might want to get better at, there are specializations to consider. Analogously, you can train in “sports” to a certain level, but beyond that you train in “track and field”, and beyond that “javelin throw”.
So, what do you actually want to get better at, and where are you starting from?
Parts of logical presentation of arguments can be trained as debate, and others as blog posts or published articles.
Philosophical history and comparative studies of populations are probably best practiced in academia.
Actual useful models of human behaviors and justifications that many give for those behaviors—probably best practiced in the doing.
Formal debate is really really terrible in practice as a way to train anything resembling good philosophy, or else I’d think that a pretty good suggestion.
Specifically, all forms of debate devolve into speed-talking contests, because if you make a point that your opponent doesn’t oppose, then they’re considered by the judges to have conceded that point; so you want to make as many points as humanly possible in the time allotted. Aside from that, the game is all about coming up with clever argumentative maneuvers that have little to do with what arguments would work in real life and nothing at all to do with the truth.
Multiple attempted reforms of debate rules to get around these problems have failed, producing essentially the same result.
With respect to the distinct skills in “philosophy”—I’m not so sure. I think maybe philosophy has correctly divided itself into subfields such as ontology, ethics, and epistemology. These subfields address different sets of questions, but use similar/identical “philosophical method” in doing so. This suggests that a common set of skills are involved in many philosophical pursuits, somewhat unlike the sports analogy.
Granted, I do suspect that there’s a list of skills, which might best be trained separately. Here is an attempt to list what they might be:
Question generation.
Maybe just come up with “is a hotdog a sandwich” type questions for lots of everyday concepts?
Hypothesis generation. Given a set of “data” (usually from intuition—eg, cases where someone does, or doesn’t, seem to be behaving morally) generate a hypothesis which fits the data (eg, a theory of morality).
This might be trained by trying to come up with dictionary definitions of foreign words, given only examples.
Another exercise could involve improving dictionary definitions of familiar words.
Counterexample generation: strike down a theory by coming up with a case which clearly goes against it.
Give counterexamples to dictionary definitions.
Counterexamples in mathematics—give negative examples for false conjectures.
Just, like, a whole lot of critiquing each other’s theories.
Argumentation: clearly, precisely, and convincingly express a philosophical view, supporting it with good reasoning and avoiding missteps (eg fallacies).
Training in formal logic and other valid methods of inference such as probability and statistics.
Fallacies and biases.
Lots of writing practice with detailed critiques for clarity, accuracy, and persuasiveness.
The above three skills seem to be a bit overly anchored to a specific way of doing philosophy for my taste, but there you have it.
Yes, I was completely turned off from ‘debate’ as a formal endeavor as a high schooler, despite my love for informal debate.
One of the main problems is that debate contests are usually formulated as zero sum, whereas the typical informal debate I engage in is not.
Do you know of any formats for nonzero sum debate competitions where the competitors argue points they actually believe in? e.g. both debaters get more points if they identify a double-crux, and you win by having more points in the tournament as a whole, not by beating your opponent.
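To make the suggestion concrete, here is a toy sketch of the kind of scoring rule described—both point values and the double-crux bonus are hypothetical assumptions, not an existing competition format:

```python
# Toy non-zero-sum debate scoring; all point values are made up.
# Each side banks points for arguments the opponent concedes are sound,
# and *both* sides earn a bonus per double-crux they jointly identify,
# so cooperating to find cruxes is rewarded rather than punished.

DOUBLE_CRUX_BONUS = 3

def round_scores(sound_args_a, sound_args_b, double_cruxes_found):
    bonus = DOUBLE_CRUX_BONUS * double_cruxes_found
    return sound_args_a + bonus, sound_args_b + bonus

# Both debaters gain from the one crux they found together:
print(round_scores(2, 4, 1))  # (5, 7)
```

Because the tournament is won on total points rather than head-to-head results, a round where both sides score well is strictly better for both than a round where one “wins” by stonewalling.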
I think there are two parts to being good at philosophy: argumentative skill and cached knowledge.
Cached knowledge is knowing a given topic, the arguments around it, and so on. Without cached knowledge you can’t engage in a real discussion, because you have to reinvent the wheel while other people are discussing the best design for a car. Getting cached knowledge is largely a matter of reading existing work and discussing it with people who know the field.
Argumentation is being able to argue well. This means spotting flaws in arguments, being able to distinguish between an argument being true and being important, finding the crux(es) of a discussion, and so on. This is hard to learn and is more of a skill. The best way to learn it in my experience is lots and lots of practice with short feedback cycles and direct, clear feedback. Competitive debating can help. So can the standard route of writing lots of papers and having someone who is good mark them and rip them apart when/where they’re unpersuasive/unclear/imprecise.
I’m worried that competitive debating and argumentation could lead to developing some negative habits.
The ability to adopt a scout mindset, listen to and process opposing views, be receptive to criticism, engage in counterfactual thinking effectively, know how to handle thought experiments, employ intuition and other tools characteristic to philosophy judiciously, be able to switch between level of complexity in speech for different audiences (e.g. avoiding technical jargon with non-specialists, using examples that resonate with the audience, etc.) are all skills that can operate well both within and outside an argumentative context.
While being good at arguing may be the most central skill to cultivate, the specifics are going to matter!
Hard agree with the potential negative effects. Debating is essentially learning to be good at motivated reasoning. That can be very good if you choose to apply said motivated reasoning skill to deeply understand all positions on a topic, even those you disagree with. It’s usually bad because most people just use their superior motivated reasoning skills to engage in confirmation bias more effectively.