Train Philosophers with Pearl and Kahneman, not Plato and Kant
Part of the sequence: Rationality and Philosophy
Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.
Bertrand Russell
I’ve complained before that philosophy is a diseased discipline which spends far too much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn’t know the slightest bit of 20th century science. Is that still the case?
You bet. There’s some good philosophy out there, but much of it is bad enough to make CMU philosopher Clark Glymour suggest that on tight university budgets, philosophy departments could be defunded unless their work is useful to (cited by) scientists and engineers — just as his own work on causal Bayes nets is now widely used in artificial intelligence and other fields.
How did philosophy get this way? Russell’s hypothesis is not too shabby. Check the syllabi of the undergraduate “intro to philosophy” classes at the world’s top 5 U.S. philosophy departments — NYU, Rutgers, Princeton, Michigan Ann Arbor, and Harvard — and you’ll find that they spend a lot of time with (1) old dead guys who were wrong about almost everything because they knew nothing of modern logic, probability theory, or science, and with (2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy. (I say more about the reasons for philosophy’s degenerate state here.)
As the CEO of a philosophy/math/compsci research institute, I think many philosophical problems are important. But the field of philosophy doesn’t seem to be very good at answering them. What can we do?
Why, come up with better philosophical methods, of course!
Scientific methods have improved over time, and so can philosophical methods. Here is the first of my recommendations...
More Pearl and Kahneman, less Plato and Kant
Philosophical training should begin with the latest and greatest formal methods (“Pearl” for the probabilistic graphical models made famous in Pearl 1988), and the latest and greatest science (“Kahneman” for the science of human reasoning reviewed in Kahneman 2011). Beginning with Plato and Kant (and company), as most universities do today, both (1) filters for inexact thinkers, as Russell suggested, and (2) teaches people to have too much respect for failed philosophical methods that are out of touch with 20th century breakthroughs in math and science.
So, I recommend we teach young philosophy students:
more Bayesian rationality, heuristics and biases, & debiasing; less informal “critical thinking skills”;
more mathematical logic & theory of computation; less term logic;
more probability theory & Bayesian scientific method; less pre-1980 philosophy of science;
more psychology of concepts & machine learning; less conceptual analysis;
more formal epistemology & computational epistemology; less pre-1980 epistemology;
more physics & cosmology; less pre-1980 metaphysics;
more psychology of choice; less philosophy of free will;
more moral psychology, decision theory, and game theory; less intuitionist moral philosophy;
more cognitive psychology & cognitive neuroscience; less pre-1980 philosophy of mind;
more linguistics & psycholinguistics; less pre-1980 philosophy of language;
more neuroaesthetics; less aesthetics;
more causal models & psychology of causal perception; less pre-1980 theories of causation.
(In other words: train philosophy students like they do at CMU, but even “more so.”)
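As a taste of the first item on that list, Bayes’ rule itself fits in a few lines. A minimal sketch with illustrative numbers (none of them from the post):

```python
# Toy Bayesian update: from a prior and two likelihoods to a posterior.
# The probabilities below are made up purely for illustration.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_h = p_evidence_given_h * prior
    joint_not_h = p_evidence_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# A hypothesis with a 1% prior, and evidence 10x likelier if it's true:
# the posterior rises sharply, yet stays below 10%.
print(posterior(0.01, 0.80, 0.08))
```

The punchline is the one the heuristics-and-biases literature hammers on: even strong evidence for a rare hypothesis can leave it improbable, which is exactly the base-rate point untrained intuition gets wrong.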
So, my own “intro to philosophy” mega-course might be guided by the following core readings:
Stanovich, Rationality and the Reflective Mind (2010)
Hinman, Fundamentals of Mathematical Logic (2005)
Russell & Norvig, Artificial Intelligence: A Modern Approach (3rd edition, 2009) — contains chapters which briefly introduce probability theory, probabilistic graphical models, computational decision theory and game theory, knowledge representation, machine learning, computational epistemology, and other useful subjects
Sipser, Introduction to the Theory of Computation (3rd edition, 2012) — relevant to lots of philosophical problems, as discussed in Aaronson (2011)
Howson & Urbach, Scientific Reasoning: The Bayesian Approach (3rd edition, 2005)
Holyoak & Morrison (eds.), The Oxford Handbook of Thinking and Reasoning (2012) — contains chapters which briefly introduce the psychology of knowledge representation, concepts, categories, causal learning, explanation, argument, decision making, judgment heuristics, moral judgment, behavioral game theory, problem solving, creativity, and other useful subjects
Dolan & Sharot (eds.), Neuroscience of Preference and Choice (2011)
Krane, Modern Physics (3rd edition, 2012) — includes a brief introduction to cosmology
(There are many prerequisites to these, of course. I think philosophy should be a Highly Advanced subject of study that requires lots of prior training in maths and the sciences, like string theory but hopefully more productive.)
Once students are equipped with some of the latest math and science, then let them tackle The Big Questions. I bet they’d get farther than those raised on Plato and Kant instead.
You might also let them read 20th century analytic philosophy at that point — hopefully their training will have inoculated them against picking up bad thinking habits.
Previous post: Philosophy Needs to Trust Your Rationality Even Though It Shouldn’t
Luke, do you have any ideas for how to reform philosophy education and professional practice without antagonizing a lot of current professional philosophers and their students and having the debate degenerate into a blue-vs-green tribal fight? Or, more generally, do you see much chance of success for such an attempt? If not, maybe you should reframe your posts (or at least future ones) as being aimed at amateur philosophers, autodidacts, CS and math majors interested in doing FAI research, and the like?
Yes, this is my intention. I don’t think I can reform how philosophy is taught at universities quickly enough to make a difference. My purpose, then, is to help “amateur philosophers, autodidacts, CS and math majors interested in doing FAI research” so that they can become better philosophical thinkers outside the university system, and avoid being mind-poisoned by a standard philosophical education.
Quickly enough? You think you can do it all??
Of course. Do you think it’s impossible, or that there’s a task Luke isn’t up to? The first seems intuitively more plausible to me than the second.
I think it’s a task Luke isn’t up to. To single-handedly reform teaching like that you would have to be a renowned philosopher or educationalist, a Dewey or Erasmus, not a twenty-something blogger. His understanding of philosophy is barely up to undergraduate level. Sorry, but that’s the way it is.
You pointed out that Luke has not started trying to do X, as evidence that he wouldn’t be up to the task of doing X. You don’t seem to understand how to do things.
When you want to accomplish a major goal, you need to do a lot of other things first. You need to get clear on what your goal is. You need to do research and accumulate the prerequisite knowledge. You need to accumulate any necessary resources. You probably need to put together a team. You may need to invent some new technologies.
I have absolutely no doubt that if he wanted to, Luke could do all the prerequisite steps and then reform Philosophy. If your hypothesis is correct, he’d in the process become a renowned philosopher of education like Dewey.
Though I would not bet against him being able to pull it off as a twenty-something blogger.
Most people could not single-handedly reform philosophy. There has to be some evidence that Luke is more capable of doing it than most people, or else we are quite sure he is not up to the task by default.
This is Luke Muehlhauser we’re talking about.
Okay, and that’s an argument; one which has… uh… interesting validity. I’m not sure how to condition on Alicorn’s dinner parties as evidence, though, so let’s set that aside for now. Would you say, at least, that the fact I am not a renowned philosopher is sufficient to conclude, pending further evidence, that I’m incapable of reforming philosophy?
Edit: in the interests of maintaining my anonymity, let’s assume for the sake of argument that I am not, in fact, a renowned philosopher; this should not be taken as indicative of my actual status in the philosophy world one way or the other.
Not given background knowledge. You’re on Less Wrong, so there is high probability that you’re capable of becoming capable of arbitrary possible things. And capability is transitive, so that means there is high probability that you’re capable of that particular thing.
Most people aren’t already renowned philosophers, and most of those don’t reform philosophy, and for those that did, they usually became renowned in the process of reforming philosophy, so that’s not much evidence either way.
And that’s an argument; one which has… uh… interesting validity.
Can’t argue with that.
Not sure why you feel the need to remind us...
?????
Luke has started to do it, in a sense: writing an article saying “reform philosophy” is starting, in a sense. It just isn’t starting in the right place, the place where you get the credentials and the competence before you throw your weight around.
Source?
Common sense. The way the world works. If you were a specialist in some subject, would you accept the subject being turned upside down by someone who didn’t know the subject?
That’s what I thought. Considering the downvotes you’ve been getting, either you’re being karmassassinated or people don’t think that’s enough; perhaps because we spend a lot of time here talking about how poor rationality affects many fields (e.g. psychiatry, philosophy, AI...)
I feel like the phrasing “barely up to undergraduate level” is like calling something “basic” or “textbook” not when it’s actually basic or textbook, but because it insinuates there is an ocean of knowledge that your opponent has yet to cross. If Luke is “barely undergraduate,” then I know a lot of philosophy undergrads who might as well not call themselves that.
While I agree that reform is far more likely to be done by a Dewey or Erasmus, your reasoning gives me a very “you must be accepted into our system if you want to criticize it” vibe.
While it’s not actually impossible to reform the teaching on a subject without yourself reaching the highest level in knowledge of it you wish to teach, it is bloody hard.
Who aren’t trying to reform the subject.
It’s not that. There is just no practical possibility of philosophy, or any other subject, being reformed by someone who does not have a very good grasp of it. You need a good grasp of it just to diagnose the problems.
The former is definitely possible, given that it’s almost continuously actual. Philosophical education is reformed all the time. The latter will be difficult for Luke to do directly, just because accomplishing the reform comes down to convincing philosophers to do things differently, and philosophers are unlikely to be exposed to Luke’s work. And, has been mentioned, Luke’s writings on the subject are not presently set up to convince philosophers.
I think the counterfactual under consideration was where Luke actually tries. That his writings are not presently set up for that is just arguing with the setup of the thought experiment.
Fair enough, though the exposure bit was my main point.
Do you think they would find it convincing if they were?
What is your strategy for doing this, other than posting articles on Less Wrong ?
Hundreds of hours of personal conversation with promising people. Also, Louie is putting together a list of classes to take at various universities.
I don’t think this approach scales very well. Though I may be overestimating the number of people who are interested in philosophy as well as capable of doing FAI research.
This approach will scale a lot better, but it is riskier. Presumably, these specific classes will help the student to “avoid being mind-poisoned by a standard philosophical education”; but what if the students enjoy the course, and end up diving head-first into the standard philosophical education, after all ?
I appreciate your sentiment; I’m one of those people who actually got an undergraduate degree in philosophy. Ivory-tower thinking has been detrimental to philosophy, but the changes you’re proposing would destroy philosophy education as it’s been practiced for well over 2,000 years.
Maybe you think that’s a good thing; having been through the education, I do not. Philosophy, or rather the study of old dead philosophers, is not pursued for the sake of their ideas but for developing a thought paradigm. The course you would be creating is not philosophy; instead it is something more akin to “How does science explain reality?”
Moreover, most disciplines were birthed in philosophy before eventually becoming their own fields, and then there’s the whole philosopher-mathematician love affair, since the two have been linked pretty closely for a while. There’s a reason why you get a PhD (Doctor of Philosophy).
So in essence, you went and cherry-picked stupid abstracts to prove your point. Yes, there are many ivory-tower philosophers who are adding nothing to our knowledge base. But no, the answer is not to sink the ship.
Go spend three months with Hegel’s Phenomenology of Spirit; it won’t change how you view the world, but it’ll sharpen your mind. Same goes for Kant’s Critique of Pure Reason.
For what it’s worth, I’m a physics/CS major and I wish I’d seen this article two years ago so I wouldn’t have wasted my credits on two philosophy classes.
Don’t be deterred from learning philosophy — just think carefully about how to do it. A decent AI class, for example, will almost certainly cover a lot of what Luke mentioned in his ideal curriculum.
I still don’t see this as sufficiently different from a blue-green tribal fight—there’s a lot of “quantitative/Bayesian approaches are the way to go, and everyone else sucks”. By targeting everyone who is not an established philosopher, you’re just demonstrating that you’re smart enough to make this divide along generational lines (which is, as Kuhn tells us, how new paradigms succeed).
The most polite way would be to call it a new subset of philosophy, let’s say “Scientific Philosophy” (or something else if this name is already taken), and then open Scientific Philosophy courses. Nobody would get offended by this.
On the other hand, it would give people easy opportunity to ignore it. They could just teach Philosophy as they did before… and perhaps include one useless short lecture on Scientific Philosophy just to show that: yeah, they heard about it.
Isn’t that one of those things like “they couldn’t hit an elephant at this distance” which people traditionally say right before being horribly surprised?
‘Most polite’? Suggesting that all other philosophical approaches are ‘unscientific’ is not very diplomatic. There’s no need for new jargon; just call it what it is, a course in Critical Thinking. This solves the problem of ‘philosophy’ being a terribly ill-defined word to begin with, rather than compounding the problem with poorly-defined terms like ‘experimental’ or ‘scientific.’
A big problem with “Critical Thinking,” at least in the UK, is that our government introduced a new school subject called “Critical Thinking” whose exams were much easier to do well in than other subjects’. This got extreme: some schools, to boost their average grades, made every student take Critical Thinking on top of the normal allocation of 3 A-level subjects. Consider what that means: students normally take only 3 subjects each, so a comparable fourth subject would be a 33% increase in workload — and the whole point was to raise average exam grades. It still worked. “Critical Thinking” is now a joke meme meaning “empty subject of no substance and easy passes.”
To people who have been through this system, saying a person “has a degree in critical thinking” sounds like a sideways way of insulting their intelligence. I think this is what diegocaleiro is referring to.
No one wants to graduate a Critical Thinker.
.
Then that needs to change. I’m fine with coining new words for utilitarian purposes, but “critical thought” is such a semantically transparent umbrella term for all the things we want to promote — certainly its scope and significance is more immediately obvious than that of “rationality,” “philosophy,” “science,” etc. — that it concerns me how hard rationalists sometimes work to avoid promoting that term. It’s cheesier and less edgy in connotation than some of the other terms, but that mainstream valence works to our advantage in some contexts.
How sure are you of this? Has anyone been given the opportunity to invest their own time and money to do so?
Fair point. It already exists, but is rarely a major. People want to apply CT to something.
Critical Thinking already exists and includes ”… identification of prejudice, bias, propaganda, self-deception, distortion, misinformation, etc.” (WP). What’s the difference between that and LessWrongism? Bayes?
It is already taken (see Reichenbach’s The Rise of Scientific Philosophy), but it arguably means something very similar to what Luke seems to be advocating anyway (that is to say, it seems to be in the same direction that Carnap, Reichenbach, and some of the other logical empiricists were moving in after the mid-20th century), so I don’t think it would be much of a problem.
Georgetown University is a prestigious university. They “reformed” medical education by introducing a new “Complementary and Alternative Medicine (CAM)” program in 2003.
Most mainstream medicine professors don’t like alternative medicine. They still didn’t succeed in blocking the CAM program. The CAM people didn’t get their program by avoiding antagonizing mainstream medicine.
LessWrong is filled with a bunch of very smart, highly skilled people in their mid-twenties. In one or two decades there’s a good chance that a fair number of those people will be in positions of power. Maybe not enough power to get every university to teach all philosophy courses this way, but enough power to get a few universities to create courses that teach philosophy that way.
In a decade, Singularity University might be a bigger institution that opens a philosophy bachelor’s program that teaches philosophy the way Luke proposes.
Just because there’s no way to get such a philosophy program in the next five years doesn’t mean that it’s an impossible long-term goal. Trying to avoid antagonizing the establishment is a bad strategy when you want to create bigger changes in society.
A better way to put this is “listen to your supporters, not your enemies.” When you want big changes, the establishment will often be your enemy, but it is rarely sensible to assume that they will be.
I don’t think so. In this case it’s more: “Say what you consider to be right, regardless of what other people say.” Don’t tone down your message because it might annoy the establishment. Don’t focus on saying what’s popular.
I don’t think lukeprog wrote the post because being anti-academic philosophy is hip on LessWrong. I don’t think that should be his main consideration when he decides how he writes his posts.
If you focus on saying stuff that might give you a tactical advantage in the moment instead of focusing on having a meaningful message, you are unlikely to say stuff with meaningful long-term impact.
By “a better way to put this” I was referring to the insight of the underlying strategic consideration; good advice rarely takes the form of “don’t take tactics into account, do what feels good.” If your supporters are the type to be fired up by anti-establishment talk, then fire up your supporters; if you would do better with supporters in the establishment, then don’t scare them away because you were harsher than you needed to be.
Compare “philosophers don’t have their act together, this is what it would look like if they did” with “we’re partnering with some professors to launch a MOOC on how to do philosophy from the LW perspective, starting with Pearl and Kahneman and focusing on how to dissolve questions.”
Luke still could have said what he said with a whole lot more tact.
.
At first I interpreted this as some kind of meta-post-modernist-abstract-(insert buzzword) comment.
This seems like an unavoidable problem with professional psychoanalysts. But philosophers are, up to a certain age at least, willing to change their minds. It could be targeted at the first few years of undergrad. I’ve seen people change from “the dead old guys” to good stuff. Just give them a chance! And a figure of high status (Bostrom and Russell come to mind) to be inspired by.
In the spirit of Viliam_Bur’s comment below, I would strongly argue in favour of making it a new course, called “Philosophy Given Science.”
There is no way professional philosophers would learn all that Luke (and reality, because reality doesn’t care about your brain’s capacity to grasp what is necessary to undertake the Big Questions) would like them to. Leave them the name “Philosophy.”
A new course would be great. “Given” somehow brings connotations of probability and Bayes, which is good. Trouble: the mega-course depends on having thousands of free hours to read several topics that each take a semester to teach. The full extension of the thing would probably span longer than a medical course does nowadays. Except for some very lucky philosophers/autodidacts like Luke, who have the discipline, cognitive capacity, time, and resources to actually learn all that, nearly no one would be able to learn it all.
It sucks when the problems set by nature and reality are not proportional to human cognition/nature/condition/capitalism/constraints.
EDIT: I have decided to transform this comment into a post in Discussion, Complement Luke’s List. The post also contains the beginning of a list of philosophy that is consistent with what Luke posts here. His layer here (science) should precede, but not substitute for, the philosophy layer being forged there (with recommendations by Bostrom, Dennett, Luke himself, etc.).
Why have you posted a picture of my mother’s murderer? You monster, Mr. Shulman, how dare you… I had nearly forgotten that terrible ordeal.
Provocative article. I agree that philosophers should be reading Pearl and Kahneman. I even agree that philosophers should spend more time with Pearl and Kahneman (and lots of other contemporary thinkers) than they do with Plato and Kant. But then, that pretty much describes my own graduate training in philosophy. And it describes the graduate training (at a very different school) received by many of the students in the department where I now teach. I recognize that my experience may be unusual, but I wonder if philosophy and philosophical training really are the way you think they are.
Bearing in mind that my own experiences may be quite unusual, I present some musings on the article nonetheless:
(1) You seem to think that philosophical training involves a lot of Aristotelian ideas (see your entries for “pre-1980 theories of causation” and “term logic”). In my philosophical education, including as an undergraduate, I took two courses that were explicitly concerned with Aristotle. Both of them were explicitly labeled as “history of philosophy” courses. Students are sometimes taught bits of Aristotelian (and Medieval) syllogistic, but those ideas are never, so far as I know, the main things taught in logic (as opposed to history) courses. In the freshman-level logic course that I teach, we build a natural deduction system up through first-order logic (with identity), plus a bit of simplified axiomatic set theory (extensionality, an axiom for the empty set instead of the axiom of comprehension, pairing, union, and power set), and a bit of probability theory for finite sample spaces (since I’m not allowed to assume that freshmen have had calculus). We cover Aristotle’s logic in less than one lecture, as a note on categorical sentences when we get to first-order logic. And really, we only do that because it is useful to see that “Some Ss are Ps” is the negation of “No Ss are Ps,” before thinking about how to solve probability problems like finding the probability of at least one six in three tosses of a fair die. Critical thinking courses are almost always service courses directed at non-philosophers.
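The die problem mentioned above yields to the complement rule: P(at least one six) = 1 − P(no sixes) = 1 − (5/6)³. A quick sketch, using exact rational arithmetic:

```python
from fractions import Fraction

# P(at least one six in three tosses) = 1 - P(no six in any of the three)
p_no_six = Fraction(5, 6) ** 3      # (5/6)^3 = 125/216
p_at_least_one = 1 - p_no_six       # 91/216

print(p_at_least_one)               # 91/216
print(float(p_at_least_one))        # roughly 0.421
```

No calculus required, as the commenter notes — just the complement rule and independence of the tosses.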
(2) You seem to think that philosophers do a lot of conceptual analysis, rather than empirical work. In my own philosophy education, I was told that conceptual analysis does not work and that with perhaps the exception of Tarski’s analysis of logical consequence, there have been no successful conceptual analyses of philosophically interesting concepts. Moreover, I had several classes—classes where the concern was with how people think (either in general or about specific things) -- where we paid attention to contemporary psychology, cognitive science, and neuroscience. In fact, restricting attention to material assigned in philosophy classes I have taken, you would find more Kahneman and Tversky than you would Plato or Kant. And you would also find a lot of other psychologists and cognitive scientists, including Gopnik, Cheng, Penn, Povinelli, Sloman, Wolff, Marr, Gibson, Damasio, and so on and so forth. Graduate students in my department are generally distrustful of their own intuitions and look for empirical ways to get at concepts (when they even care about concepts). For example, one excellent student in my department, Zach Horne, has been thinking a bit about the analysis of knowledge (which is by no means the central problem in contemporary epistemology), but he’s attacking the problem via experiments involving semantic integration. And I’ve done my own experimental work on the analysis of knowledge, though the experiments were not as clever.
(3) You seem to think that philosophy before 1980 (why that date??) is not sufficiently connected to actual science to be worth reading, and that this is mostly what philosophers read. Both are, I think, incorrect claims.
With respect to the first claim, there is lots of philosophical work before 1980 that is both closely engaged with contemporaneous science and amazingly useful to read. Take a look at Carnap’s article on “Testability and Meaning,” or his book on The Logical Foundations of Probability. Read through Reichenbach’s book on The Direction of Time. These books definitely repay close reading. All of Russell’s work was written before 1980 -- since he died in 1970! Wittgenstein’s later work is enormously useful for preventing unnecessary disputes about words, but it was written before 1980. This shouldn’t be surprising. After all, lots of scientific, mathematical, and statistical work from before 1980 is well worth reading today. Lots of the heuristics and biases literature from the ’70s is still great to read. Savage’s Foundations of Statistics is definitely worth reading today. As is lots of material from de Finetti, Good, Turing, Wright, Neyman, Simon, and many others. Feynman’s The Character of Physical Law was a lecture series delivered in 1960. Is it past its expiration date? It’s not the place to go for cutting edge physics, but I would highly recommend it as reading for an undergraduate. I might assign a chunk of it in my undergraduate philosophy of science course next semester. (Unless you convince me it’s a really, really bad idea.) Why think that philosophical work ages worse than scientific work?
With respect to the second claim, you might be right with respect to undergraduate education. On the other hand, undergraduate physics education isn’t a whole lot better (if any), is it? But with respect to graduate training, it seems to me that if one is interested in contemporary problems, rather than caring about the history of ideas, one reads primarily contemporary philosophers. In a typical philosophy course on causation, I would guess you read more of David Lewis than anyone. But that’s not so bad, since Lewis’ ideas are very closely connected to Pearl on the one hand and the dominant approaches to causal inference in statistics on the other. The syllabus and reading lists for the graduate seminar on causation that I am just wrapping up teaching are here, in case you want to see the way I approach teaching the topic. I’ll just note that in my smallish seminar (about eight people—six enrolled for credit) two people are writing on decision theory, two are writing on how to use causal Bayes nets to do counterfactual reasoning, and one is writing on the contextual unanimity requirement in probabilistic accounts of causation. Only one person is doing what might be considered an historical project.
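The observation-vs-intervention distinction at the heart of causal Bayes nets — the thing connecting Lewis, Pearl, and the statistics literature mentioned above — can be demonstrated in a few lines. A toy sketch with made-up numbers, loosely following the classic rain/sprinkler structure (nothing here is drawn from the seminar readings):

```python
# Minimal causal Bayes net: Rain -> Sprinkler, Rain -> Wet, Sprinkler -> Wet.
# Contrasts observing Sprinkler=on with intervening via do(Sprinkler=on).
# All probabilities are invented for illustration.

P_RAIN = 0.3
P_SPRINKLER_GIVEN_RAIN = {True: 0.1, False: 0.6}
P_WET = {(True, True): 0.99, (True, False): 0.8,    # keys: (rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet, do_sprinkler=None):
    """Joint probability; do_sprinkler severs the Rain -> Sprinkler edge."""
    p = P_RAIN if rain else 1 - P_RAIN
    if do_sprinkler is None:
        ps = P_SPRINKLER_GIVEN_RAIN[rain]
        p *= ps if sprinkler else 1 - ps
    elif sprinkler != do_sprinkler:
        return 0.0  # under intervention, Sprinkler is fixed, not inferred
    pw = P_WET[(rain, sprinkler)]
    return p * (pw if wet else 1 - pw)

def p_wet(sprinkler, do=None):
    num = sum(joint(r, sprinkler, True, do) for r in (True, False))
    den = sum(joint(r, sprinkler, w, do)
              for r in (True, False) for w in (True, False))
    return num / den

print(p_wet(True))           # P(Wet | Sprinkler=on): observation
print(p_wet(True, do=True))  # P(Wet | do(Sprinkler=on)): intervention
```

The two numbers differ: observing the sprinkler on is evidence against rain (people run sprinklers less when it rains), while intervening to turn it on tells you nothing about the weather. That asymmetry is what the counterfactual-reasoning projects in such a seminar exploit.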
Rather than giving a very artificial cut-off date, it seems to me we ought to be reading good philosophy from whenever it comes. Sometimes, that will mean reading old-but-good work from Bacon or Boole or (yes) Kant or Peirce or Carnap. And that is okay.
(4) You seem to endorse Glymour’s recommendation that philosophy departments be judged based on the external funding they pull in. On the other hand, you say there should be less philosophical work (or training at least) on free will. As I pointed out the first time you mentioned Glymour’s manifesto, there is more than a little tension here, since work on free will (which you and I and probably Glymour don’t care about) does get external funding. (In any event, this is more than a little odd, since it typically isn’t the way funding of university departments works in the humanities, anyway, where most funding is tied to teaching rather than to research and where most salaries are pathetically small in comparison with STEM counterparts.) Where I really agree with Glymour is in thinking that philosophy departments ought to be shelter for iconoclasts. But in that case, philosophy should be understood to be the discipline that houses the weirdos. We should then keep a look-out for good ideas coming from philosophy, since those rare gems are often worth quite a lot, but we also shouldn’t panic when the discipline looks like it’s run by a bunch of weirdos. In fact, I think this is pretty close to being exactly what contemporary philosophy actually is as a discipline.
I’m sure I could say a lot more, but this comment is already excessively long. Perhaps the take-away should be this. Set aside the question of how philosophy is taught now. I am receptive to teaching philosophy in a better way. I want the best minds to be studying and doing philosophy. (And if I can’t get that, then I would at least like the best minds to see that there is value in doing philosophy even if they decide to spend their effort elsewhere.) If I can pull in the best people by learning and teaching more artificial intelligence or statistics or whatever, I’m game. I teach a lot of that now, but even if I didn’t, I hope I would be more interested in inspiring people to learn and think and push civilization forward than in business as usual.
EDIT: I guess markdown language didn’t like my numbering scheme. (I really wish we had a preview window for comments.)
You did indeed have an unusual philosophical training. In fact, the head of your dissertation committee was a co-author with Glymour on the work that Pearl built on with Causality.
Not really. Term logic is my only mention of Aristotle, and I know that philosophy departments focus on first-order logic and not term logic these days. Your training was not unusual in this matter. First-order logic training is good, which is why I said there should be more of it (as part of mathematical logic).
Good, but this is not the norm. Machery was also on your dissertation committee; he is the author of Doing Without Concepts, a book I’ve previously endorsed to some degree.
Of course. There are a few shining exemplars of scientific, formal philosophy prior to 1980. That’s why I recommended philosophers be trained with “less” pre-1980s stuff, not “no” pre-1980s stuff.
I was, in fact, aware of that. ;)
In the grand scheme of things, I may have had an odd education. However, it’s not like I’m the only student that Glymour, Spirtes, Machery, and many of my other teachers have had. Basically every student who went through Pitt HPS or CMU’s Philosophy Department had the same or deeper exposure to psychology, cognitive science, neuroscience, causal Bayes nets, confirmation theory, etc. Either that, or they got an enormous helping of algebraic quantum field theory, gauge theory, and other philosophy of physics stuff.
You might argue that these are very unusual departments, and I am inclined to agree with you. But only weakly. If you look at Michigan or Rutgers, you find lots of people doing excellent work in decision theory, confirmation theory, philosophy of physics, philosophy of cognitive science, experimental philosophy, etc. A cluster of schools in the New York area—all pretty highly ranked—do the same things. So do schools in California, like Stanford, UC Irvine, and UCSD. My rough estimate is that 20-25% of all philosophical education at schools in Leiter’s Top 25 is pretty similar to mine. Not a majority, but not a small chunk, either, given how much of philosophy is devoted to ethics. That is, of course, just an educated guess. I don’t have a data-driven analysis of what philosophical training looks like, but then neither do you. Hence, I think we should be cautious about making sweeping claims about what philosophical training looks like. It might not look the way you think it looks, and from the inside, it doesn’t seem to look the way you say it looks. Data are needed if we want to say anything with any kind of confidence.
Your pre-1980s causation link goes to a subsection of the wiki article on causality, namely the subsection on Aristotle’s theory of causation. The rest of the article is so ill-organized that I couldn’t tell which things you meant to be pointing to. So I defaulted to “whatever the link first takes me to,” which was Aristotle. Maybe you thought it went somewhere else, or meant to be pointing to something else?
Anyway, I know I have a tendency only to criticize, where I should also be flagging agreement. I agree with a lot of what you’re saying here and elsewhere. Don’t forget that you have allies in establishment philosophy.
Of course. I said it for the benefit of others. But I guess I should have said “As I’m sure you know...”
I think you might be reading too much into what I’ve claimed in my article. I said things like:
“Not all philosophy is this bad, but much of it is bad enough...” (not, e.g. “most philosophy is this bad”)
“you’ll find that [these classes] spend a lot of time with...” (not, e.g., “spend most of their time with...”)
“More X… less Y...” (not, e.g., “X, not Y”)
No, the link goes to the “Western Philosophy” section (see the URL), the first subsection of which happens to be Aristotle.
You might be right that I’m reading too much into what you’ve written. However, I suspect (especially given the other comments in this thread and the comments on the reddit thread) that the reading “Philosophy is overwhelmingly bad and should be killed with fire,” is the one that readers are most likely to actually give to what you’ve written. I don’t know whether there is a good way to both (a) make the points you want to make about improving philosophy education and (b) make the stronger reading unlikely.
I’m curious: if you couldn’t have your whole mega-course (which seems more like the basis for a degree program than the basis for a single course, really), what one or two concrete course offerings would you want to see in every philosophy program? I ask because while I may not be able to change my whole department, I do have some freedom in which courses I teach and how I teach them. If you are planning to cover this in more detail in upcoming posts, feel free to ignore the question here.
Also, I did understand what you were up to with the Spirtes reference, I just thought it was funny. I tried to imagine what the world would have had to be like for me to have been surprised by finding out that Spirtes was the lead author on Causation, Prediction, and Search, and that made me smile.
Yes; hopefully I can do better in my next post.
One course I’d want in every philosophy curriculum would be something like “The Science of Changing Your Mind,” based on the more epistemically-focused material that CFAR is learning how to teach. The course doesn’t exist yet, but if it did, it would have people drill the particular skills involved in Not Fooling Oneself. You know, teachable rationality skills: be specific, avoid motivated cognition, get curious, etc. — but after we’ve figured out how to teach these things effectively, and aren’t just guessing at which exercises might be effective. (Why this? Because Philosophy Needs to Trust Your Rationality Even Though It Shouldn’t.)
Though it doesn’t yet exist, if such a course sounds as helpful to you as it does to me, then you could of course work with CFAR and other interested parties to develop it. CFAR is already working with Nobel laureate Saul Perlmutter at Berkeley to develop some kind of course on rationality, though I don’t have the details. I know CFAR president Julia Galef is particularly passionate about the relevance of trainable rationality skills to successful philosophical practice.
What about courses that could e.g. be run from existing textbooks? It is difficult to suggest entry-level courses that would be useful. Aaronson’s course Philosophy and Theoretical Computer Science could be good, but it seems to require significant background in computability and complexity theory.
One candidate might be a course in probability theory and its implications for philosophy of science — the kind of material covered in the early chapters of Koller & Friedman (2009) and then Howson & Urbach (2005) (or, more briefly, Yudkowsky 2005).
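For what it’s worth, the core calculation such a course would drill is ordinary Bayesian confirmation. A toy sketch with illustrative numbers (mine, not anything from Howson & Urbach):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Evidence E confirms H whenever P(E|H) > P(E|~H):
# with prior 0.3 and likelihoods 0.8 vs 0.2, the posterior rises to ~0.63.
p = posterior(0.3, 0.8, 0.2)
```

The philosophical payoff is that “confirmation” stops being a matter of verbal dispute and becomes a quantity you can compute and argue about precisely.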
Another candidate would be a course on experimental philosophy, perhaps expanding on Alexander (2012).
I am interested. Should I contact Julia directly or is there something else I should do in order to get involved?
Also, since you mention Alexander’s book, let me make a shameless plug here: Justin Sytsma and I just finished a draft of our own introduction to experimental philosophy, which is under contract with Broadview and should be in print in the next year or so.
I look forward to your book with Sytsma! Yes, contact Julia directly.
Is this an open invitation? Because such a course sounds even more helpful to me than it does to you, I suspect. I probably have a lot of catching up, learning, and de-corrupting to do on myself before I’m at a level where I’d be useful rather than a waste of CFAR’s* time, though.
As a point of reference, I’ve recently been shifting my life goals towards the objective of reducing “knowledge” and “expertise” to quantifiable atomic units that can be discussed, acquired, and evaluated at the same level of detail and precision as, say, electronic equipment or construction machinery currently is for IT businesses or construction contractors.
I suspect my best path towards this is through an in-depth analytic study of inferential distance and the interlocking of concepts into ideas, and how this could be fully reduced into units of knowledge and information such that it would always be clear, visible and obvious to a tutor exactly which specific units are required to get from A to B on a certain topic, and easy to evaluate which one is lacking in a student.
However, while people are often impressed with just the above statements, I cringe at the fact that I can only say it, and am only grasping at straws and vague mental handles when trying to make sense out of it and actually work on the problem. And it feels almost like an applause light to say this to you, but it seems like everything in this area is… just… going… too… slow… and that really bugs me a lot.
* and those “other interested parties” (Who are they, if you know any examples?)
Of course, you may always contact CFAR about such things. Whether it goes any further than that will vary.
As for “other interested parties,” I recall coming across philosophy and psychology professors who wanted to develop CFAR-like courses for university students, but I don’t recall who they are.
I second that.
Excellent post overall.
I particularly agree with this part. The project of regimenting philosophy to conform to someone’s ideas of correctness or meaningfulness or worth isn’t just objectionably illiberal (though it is); it is counterproductive, because you need some discipline that houses the weirdos. If none of them does, then those left-field ideas are going to slip through the cracks.
It’s a graduate level text; to benefit from it adequately, one should have at least a pure math undergraduate major worth of training, including the first courses in naive set theory and formal logic. A text that’s too advanced for one’s level risks confusing the reader, introducing intuitive misconceptions and giving the illusion of understanding, so it’s better to read up on something much more basic. This book might be a long term goal that contributes to shaping the curriculum, but then it should be understood that there are at least 20 books before it on the reading list.
Right; many of my selections presume prior training. I do think philosophy should be a Highly Advanced subject of study that requires lots of prior training in maths and the sciences, like string theory but hopefully more productive.
Do you think you could offer a list of bottom-up books that don’t rely on much more than what you’d expect of a person before they start uni (and a basic programming ability)?
I know I was thinking of buying the aforementioned book, but I’m roughly at the academic level I just described, and reviews seem to indicate that it (and AI: A Modern Approach) might need a bit more.
Edit: Or even, could you please indicate the sort of prior level required for your actual ‘Intro to Philosophy Mega-Course’. I found that some books (I forget if they’re from that list or the above bigger one) had very few helpful reviews on amazon or elsewhere, and I’m apprehensive in buying an expensive book I might not be able to use. Thanks.
I’ve only read the first ~ten chapters, but AIMA is relatively accessible. It requires you to go at your own proper pace, and notice when you haven’t fully comprehended something, but it doesn’t assume too much background. If you aren’t shy about googling/wikipedia-ing, you should be fine.
Thanks.
I agree that modern science provides valuable insights into philosophical problems. I also agree that Bayesian probability theory and machine learning are powerful models for approaching problems in epistemology. This is why I’m in grad school in machine learning, and not for philosophy. Furthermore, I’m not a big fan of ancient philosophers (especially ones who think categories are absolute), and I’d like to see the computational theory of mind excised from popular thought, in favor of something closer to embodied cognition. I actually really like the idea of incorporating modern theories and empirical discoveries into a philosophical curriculum.
Despite this, I have a strong negative reaction to your post, because it suggests there is One True Way to do philosophy and that everyone who does not follow the Ways of Bayes is doing it wrong. The last thing I want us teaching students is any kind of absolutism. It can only damage students to tell them that our current models are the true models, and all past thinkers were necessarily wrong. It would also damage students to restrict them to one philosophical viewpoint; as much as I like Bayesian reasoning and empiricism, I think it would hurt students to teach them that these methods are the One True Way, because it would prevent them from exploring alternative viewpoints.
I think that students of philosophy should be taught as many theories as possible, both ancient and modern. By coming to understand the diverse range of models that we’ve applied over the course of human history, students can learn some humility. Just as all of these past models were superseded, our current theories will inevitably be replaced. Just as we can spot the glaring errors in past philosophical models, the people of the future will spot the “obvious” follies in our own ideas.
Also, the more models that students learn, the more “degrees of freedom” they will realize exist. They will come to understand along which dimensions worldviews can vary; they can then explore other options for these dimensions, or discover new dimensions that no one has tried varying yet. I strongly believe that learning more worldviews is a powerful method of keeping one’s mind flexible enough to come up with genuinely new ideas.
Lastly, as much as I love mathematical models and rigorous empiricism, I oppose the trend of applying them haphazardly to the social sciences. If we’re studying e.g. anthropology, I think it’s a mistake to favor statistical data over first-hand accounts or subjective analyses. Not because there’s anything inherently wrong with empirical and statistical methods, but because the models we use are too simple. There are so many features, and it’s hard to account for all of them, both because we don’t know which features to choose, and because inference is computationally intractable in such an enormous model. Fortunately, the typical human brain comes prepackaged with empathy and a theory of mind, a powerful module for modeling the behaviors/preferences/internal experiences of other humans. Certainly, this module is subject to biases and might make systematic errors when reasoning. But when choosing between two imperfect models, I tend to think our built-in circuitry is better suited for the social sciences than tools of machine learning. I assume that our built-in intuitive machinery is useful for some branches of philosophy as well.
You are not supposed to teach them it’s the One True Way, just that it’s The Best Way Anyone Has Found So Far By A Fair Margin.
This also seems problematic, for the same reasons.
And what if it is? I am not claiming this is so. It is rhetorical. What then?
Teach the best case that there is for each of several popular opinions. Give the students assignments about the interactions of these different opinions, and let/require the students to debate which ones are best, but don’t give a one-sided approach.
Best for what? The idea that you can solve philosophical problems with science is lacking in examples of philosophical problems that have been solved with science. There is just the hope that methods that have worked in one field will work in another.
It grieves me to note that almost all the arguments in your post could be applied, mutatis mutandis, to why we should teach kids intelligent design as well as evolution.
In certain contexts, yeah. I think most kids would be able to smell the bullshit if both theories were laid out side-by-side, with historical context available.
Illusion of transparency? I believe some creationists say the same thing.
Sounds like an opportunity for a variation on the old Priests of Baal routine, then.
Without any comment on whether the post is correct or not, I want to note that if the sequences have done their job, LWers will not be persuaded by this post. It looks at a large number of abstracts, picks a non-representative (and small) sample, and then quotes them to make them salient in the reader’s mind.
It could have been made more convincing by using less biased sampling, such as generating 3 random numbers for each journal, multiplying them by the total number of articles in the journal, and then posting the abstracts of those articles.
Hang on. Did you mean to say that the conclusion of this post is wrong? If the sequences did their job, then LWers should steelman the arguments, and be persuaded iff the conclusion is correct, regardless of the arguments presented.
It’s hard to steelman, for example, an incorrect proof of Fermat’s Theorem in the way you describe.
Filling in the gaps in this post requires doing some research into the current state of philosophy. Some of the commenters are in fact trying to do just that. But it’s much harder to lay an egg than to tell if one is rotten.
I was about to write a post saying how even though we are aware this is a biased sample, the fact that 4 papers with questionable thinking appeared in top journals recently is still a lot of evidence. Then, I looked at how recent “recently” is. Two papers are from 2012, one is from 2011, and one is from 2010.
The fact that Luke went back as far as 2 years suggests that either the field isn’t that bad, or Luke searched back chronologically until he found them. If it’s the former, then I would update away from philosophy being a diseased field, because even in top journals I would expect a few bad papers a year. If it’s the latter, then Luke should let us know.
I had guessed that Luke picked out what he thought were good representative samples, since he is probably familiar enough with the field to do so.
Do we know if this is representative, or just the worst ones?
Let’s add some data. Noûs is the second-highest rated general philosophy journal. Here are its 2012 articles, with abstracts/introductions:
Concluded:
I think that suffices. So… does this help us determine whether philosophy is useful? Are they Doing It Wrong?
I’ve only gone through some of these and I’ll probably be spending the next few hours on all these various tabs now opened, but I would tentatively conclude that the original selection of articles presented was misleading and not fully representative.
Continued:
So the only one of these that jumps out at me as being really unhelpful is
This fails at multiple levels. First, it fails because pretty much everything Kant wrote about geometry runs into the serious problem that his whole idea is deeply connected to Euclidean geometry being the one true, correct geometry. Second, it runs into the earlier-discussed problem of trying to discuss what major philosophers meant, as if that had intrinsic interest. Third, a glance strongly suggests that they are ignoring the large body of actual developmental-psych data about how children actually do and do not demonstrate intuitions for their surrounding geometry.
I don’t know enough about the subjects to say much about the Skow, Uzquiano, and Button pieces, although I suspect that the third confuses linguistic with metaphysical issues.
I agree with most of your objections and I think we must, at this point, notice how different this selection of articles looks from what Luke originally presented.
Since you’re criticizing an article based on my own chosen excerpts of it, it would be irresponsible of me not to give fuller quotes so that Dunlop can respond:
The criticism has been made that philosophers waste too much time on historical exegesis; but I found surprisingly very little historical work in Noûs, and even this Kant stuff is surprisingly relevant to some of the contemporary issues we ourselves have been debating recently—concerning the relationship between imagined or constructed ‘mathematical reality’ and the empirical world, the dependence of logical truth upon thought, etc.
Ok. This excerpt gives me a much higher opinion of the piece in question and substantially reduces the validity of my criticisms. Since this was the article that most strongly seemed to support the sort of point that Luke was making, I’m forced to update strongly against Luke’s selected papers being at all representative.
I don’t recall him ever restricting himself to only Euclidean geometry. In Critique of Pure Reason, “geometry” is mentioned twenty times (each paragraph a separate quote; Markdown is being dumb):
Other than this last quote (which is simply wrong), all of the other mentions consider geometry either 1) as a mere example or 2) in the context of phenomenal experience, which is predominantly Euclidean for standard human beings on Earth. One could easily take it as a partial statement of the psychological unity of humankind.
He doesn’t discuss it that much, but there’s a strong argument that it is operating in the background (pdf). The same author wrote an essay about this, but I can’t find it right now.
This is strange, because your link is about Kant disagreeing with other philosophers on the nature of Euclid’s parallel postulate. I took your claim to be that because Kant was seemingly only aware of Euclidean geometry, he used properties specific to only Euclidean geometry in his discussion of geometry.
Show me explicitly where this “operating in the background” is, and I’d be more convinced.
Hmm, ok. Rereading the link and thinking about this more, it looks like I’m either strongly misremembering what it said or am just hopelessly confused. I’ll need to think about this more.
Thinking about this less and something else more is also a good option.
That’s true, but it doesn’t sound relevant to the subject of the article.
A solid blow.
That might be relevant to Strawson’s view, I’m not actually sure what he says, but it’s not relevant to Kant’s view. ‘A priori’ does not mean ‘innate’ or biologically determined.
It doesn’t to modern philosophers, but the way it was used by Kant, it seems he meant something very close to how we would use “innate.”
No, Kant thought that you could only have synthetic a priori knowledge if you already had a fair amount of experience with the world. Synthetic a priori knowledge is knowledge which rests on experience (Kant thinks all knowledge begins with experience), but it doesn’t make reference to specific experiences. Likewise, analytic a priori knowledge requires knowledge of language and logic, which, of course, is not innate either. Kant doesn’t think there’s any such thing as innate knowledge, if this means knowledge temporally prior to any experience.
This has it about right: http://en.wikipedia.org/wiki/A_priori_and_a_posteriori#Immanuel_Kant
Okay, so we’ve more or less determined that the stuff going on at Noûs is very different from what the post presented. However, looking at The Philosophical Review’s and Mind’s recent issues has set off some alarm bells in my mind. Unfortunately, I don’t have time to investigate further, but we may need to consider the possibility that the texts were representative and Noûs is just a superior journal (in LW terms) to the others...
Anyone familiar enough with a given set to be able to pick a representative subset is also capable of picking a non-representative subset if so motivated.
I have a feeling I will have a lot to say about this post, but I will start with one small issue: what is the watershed that occurred in metaphysics circa 1980? I’m pretty sure the Wikipedia article isn’t going to tell me, because I wrote the “history and schools” section.
I chose 1980 because after 1980 there are at least a few philosophers studying these subjects who are attuned enough to contemporary science, compsci, and maths to get things basically right. E.g. after 1980 some philosophers of mind decided to just start agreeing with what cognitive scientists were discovering at the time.
And none before? Kant was immensely influenced by Newton, to take but one example. That was more like 1780.
Kant was a scientist, even in the modern sense (and a pretty decent one too). For instance, Kant was one of the first proponents of the nebular hypothesis.
It seems a strange date to choose in some cases. If anything, philosophy of mind was a lot more LessWrongy (i.e., zombieless) before 1980.
When you say things like “More machine learning, more physics, more game theory, more math”, what I hear is, “more of anything that’s not philosophy”.
For example, Machine Learning alone is a topic whose understanding requires a semi-decent grounding in math, computer science, and practical programming. That’s at least a year of study for someone with an IQ over 150, and probably something like three or four years for the rest of us. And that’s just one topic; you list others as well. It sounds like you want us to just stop doing philosophy altogether, and stick to the more useful stuff.
Imagine people who are trying to write books without knowing the alphabet. They keep trying for ages, but produce nothing that another person could unambiguously read.
So someone comes and says: “You should learn alphabet first.”
And they respond: “We are interested in writing books, not learning alphabet. The more time we spend learning alphabet, the less time we will have for actually writing books. We desire to become writers, not linguists.” (Famous writers are high status, linguistics is considered boring by most.)
Similarly it seems to me that many philosophers are too busy discussing deep topics about the world, so they don’t have time to actually study the world. To be fair, they do study a lot—but mostly the opinions of people who used the same strategy, decades and centuries ago. Knowing Plato’s opinions on X is higher status than knowing X.
This would be acceptable in situations where science does not know anything about X, so the expert’s opinion is the best we can have. But in many topics this simply isn’t true. Learning what we already know about X is the cost of ability to say something new and correct about X. The costs are higher than 2000 years ago, because the simple stuff is already known.
Mathematicians also cannot become famous today for discovering that a^2+b^2=c^2 in a right-angled triangle. They also have to study the simple stuff for years, before they are able to contribute something new. Computer programmers also cannot make billions by writing a new MS DOS, even if it were better than original. Neither do they get paid for quoting Dijkstra correctly. Philosophers need to work harder than centuries ago, too.
Are truth, meaning, beauty, and goodness about the world? They are just not susceptible to straightforward empirical enquiry. People study Plato on the Good because there aren’t good-ometers.
Beauty is about the world. More precisely, about humans. What makes humans perceive X as beautiful?
Required knowledge about the world: What happens in our brains? (Neuroscience, psychology, biology.) Do our beauty judgements change across cultures or centuries? (Sociology, anthropology, art history.) Do monkeys feel something similar? (Biology, ethology.)
It might prove helpful to look at humans etc. to understand the things that trigger the topic of beauty, in the sense that you might learn interesting related ideas in greater detail by studying these things. But the detailed conditions of triggering the topic are not necessarily among them, so “What makes humans perceive X as beautiful?” may be a less useful question than “What are some representative examples of things that are perceived by humans as beautiful?”. The world gives you detailed data for investigation, but you don’t necessarily care about the data; the ideas it suggests might make the original data irrelevant at some point.
Not in any sense that leads to straightforward empiricism.
That knowledge about the world is necessary is not in doubt. The issue is whether it is sufficient.
We agree about the first sentence. And the knowledge about the world also helps to form a qualified opinion about the second one.
I have no problem with students of philosophy learning Plato’s opinions and the related science, if they want to write a book about Beauty. (I just imagine them more likely to do the former part and ignore the latter.)
A lot of this seems to be imagination-driven.
We imagine that our imagination has all the answers. In theory, theory and practice are the same; in practice they are not.
(If Plato is not at least a little bit a good-ometer, there is no point in studying Plato for that purpose either.)
So? Are you saying phil. has the right methodology, but is studying the wrong people?
I understand it as:
Either Plato invented some good-ometer, and then we should study the good-ometer regardless of Plato.
Or Plato (and his followers for recent 2000 years) failed at inventing some good-ometer, and then how is studying Plato helpful for our understanding of Good?
In other words, if a person X discovered Y, we should be able to teach Y without teaching about X. We don’t learn about the life of Pythagoras to understand the Pythagorean theorem. We don’t have to read Turing’s original papers to understand computing. Etc. The knowledge was extracted, condensed, improved; if something was proven wrong, it was discarded. Why don’t philosophers process their data in the same way? Why is it always necessary to go back to the ancients?
It’s not always taught this way. Shelly Kagan, a philosophy professor at Yale, has a tendency to teach those Y’s without teaching about X first, which you can see since some of his courses are available online.
This is actually pretty standard in certain courses, like logic and ethics, where we have a better idea of what theories we want to teach. Actually, I learned less history and fewer names in philosophical logic than in most math courses.
It’s also a subject of some controversy. For a while it was common to use textbooks that tended in the direction of teaching the ideas. But a lot of folks favor original works, especially since part of the point of Philosophy is being able to pick up a work by Plato and reason about it for yourself.
Philosophical progress is more about discovering errors than truths.
There is continuing doubt about exactly what the Greats were saying. The exegesis is ongoing.
Circularities in evaluating what is “good” or “right” answer to a question, since those are philosophical questions too.
It isn’t always necessary to go back to the Ancients; once you have got past 101, it is possible to have a career where you never mention Plato.
Philosophy isn’t broken science, it’s philosophy.
This isn’t a defense of philosophy as far as I can tell.
Why is this at all relevant aside from at a historical level? Who said an idea isn’t connected to whether the idea is true or not. If there are two different interpretations of what someone said, just label them differently and discuss accordingly.
If that’s the case, that raises the question of why we bother doing things that way in intro classes. In contrast, for example, we discuss what the ancient Greeks did in math because, whatever area of math you go into, you are sometimes going to need some of their ideas.
This seems more like rhetoric than a coherent claim.
It was intended as an explanation. If you are contrasting it with science, which you think does positively confirm theories, you need to disprove Popper.
Because philosophy is about what philosophers have thought. Why would you think it is irrelevant? Because you think phil. is, or should be, some sort of technical discipline?
Huh?
That has happened several times. Phils. really aren’t hopelessly dumb.
I’ve answered that elsewhere. You need them to set subsequent thinking in context. But that can become undiscussed background information. I notice you have no objection to the way physics is taught, which invariably starts with a lot of classical physics, even though it is “wrong”.
I’m serious. All the criticism of phil. is coming from people who expect it to be like science and work like science, and there is no reason it should.
I never said science positively confirms theories. But there are so many issues with Popper that it’s almost not worth discussing. We’ve had seventy years since Popper at this point, and philosophy of science is one area where unambiguous progress has been made. I don’t even need to point to something as modern as Bayesianism, just to the pretty effective criticisms of Popper by Quine, Lakatos and Kuhn. Falsification is good as a rough guideline, but problems like the theory-laden nature of observation, the fact that data can say something quite complicated about hypotheses, and other issues all make Popper an unsound basis for science at either a descriptive or prescriptive level.
And if you are trying to argue that philosophy functions in a Popperian fashion, the obvious question is why, once it discovers the errors, it doesn’t just leave the error-filled arguments behind.
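A toy illustration of the contrast mentioned above (my own sketch, not from the thread; the hypothesis, prior, and likelihoods are all invented for the example): Bayesian updating assigns hypotheses graded degrees of confirmation from data, whereas naive falsificationism only ever issues the binary verdict “refuted / not yet refuted.”

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from the prior P(H) and the likelihoods
    P(E|H) and P(E|not-H), via Bayes' theorem."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# Hypothesis H: "this coin is biased 80% toward heads."
# Alternative: the coin is fair (50% heads).
p = 0.5  # prior P(H)
for flip in ["H", "H", "T", "H"]:
    if flip == "H":
        p = bayes_update(p, 0.8, 0.5)  # heads: evidence for H
    else:
        p = bayes_update(p, 0.2, 0.5)  # tails: evidence against H

# One tails outcome doesn't "falsify" H; it merely lowers P(H) a bit,
# and the later heads raise it again.
print(round(p, 3))  # → 0.621
```

The point of the sketch is just that a single piece of disconfirming data shifts a probability rather than killing the hypothesis outright, which is one way of cashing out the claim that data can say something quite complicated about hypotheses.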
So this may be in part a dispute over definitions. But simply put, it isn’t at all clear why X should include “history of X” and in general it doesn’t. Math, psychology, physics, medicine, art, linguistics, music, all distinguish between X and history of X. Note that many of the subjects on my list are not sciences, so any claimed distinction between science and philosophy isn’t relevant here.
This should be clear: the truth value of a claim isn’t connected to who espoused it. Even if Terry Tao says that 1+1=3, that doesn’t make it more true. And the same applies to philosophers. Whether a claim was made by Plato or by Joey the bartender doesn’t make it more or less true. (It is possible that authorship provides weak heuristic evidence about a claim being likely to be true.)
Right, and that’s part of the problem in a nutshell: the reasonable word to use here is “several,” not “frequently” or even “every time this question comes up.”
The problem, though, isn’t teaching background. If you look elsewhere in this thread you’ll see that I’ve argued for a much more limited form of Luke’s thesis: deemphasizing the classical sources in philosophy, but not eliminating them. The equivalent for physics would be if, before one did Newton, one had a semester on Aristotle, Ptolemy, Aristarchus, Oresme, etc., and if we still saw articles in top-tier journals discussing Aristotle’s physics.
It isn’t just the sciences that make progress. Math makes progress, and it is far closer in its goals to philosophy than science is. Linguistics is a fuzzy border case, as is economics, yet both make real progress. If you prefer, consider these discussions to be about whether philosophy should act more like a science, and grapple with that question. You haven’t presented any argument for why philosophy shouldn’t act more like the sciences, other than the claim that for a lot of philosophers the status quo is that it doesn’t.
It does. LP has been abandoned. So have many issues in Scholasticism.
True but irrelevant. Phil. doesn’t have to work like other subjects.
Philosophical claims are often subtle and need to be interpreted in context together with the rest of their originator’s body of work.
That’s an opinion. How about putting forward some examples to show that phils. really are stupidly underusing this manoeuvre?
That’s an opinion. It could do with being backed by detailed work showing that phils. really are stupidly overemphasizing the ancients. On the other hand, it is perhaps motivated by an excessive tendency to equate phil. with science. In science it is uncontroversial that the old stuff is probably wrong.
I have put forward the argument that it does not deal with the same sorts of questions, so it is, to say the least, not obvious that scientific techniques would work as well as LWers expect. If they can be shown to (as in experimental philosophy), I am happy with that. But Luke’s claims are much more sweeping than piecemeal improvement.
Well...yes. Meaning, Beauty, and Goodness are all squarely in the domain of neuroscience/psychology. Truth is in the domain of the sciences, and its sister Tautology is in Mathematics. A philosopher who wishes to say meaningful things about any of the above needs to be well versed in all these things.
Plato (by no fault of his own, of course) wasn’t well versed in any of them, which is why his thinking feels so clumsy and childlike to modern thinkers.
And the fact that we remember Plato today, rather than many other ancient philosophers who were a lot...less wrong...is an accident of history.
I wonder what happened to Justification? I justified my claim that Good is not in the domain of science by pointing out that it is not empirically detectable, that we don’t have good-ometers. You just gainsaid that without offering a counterargument.
The world is complicated.
The only time I’ve ever read a vague four-word sentence that deserves an upvote. Such things tickle me.
Tickle tickle tickle tickle!
Yeah, it sucks that you can’t do good philosophy without knowing a ton of other stuff, but that’s life. We don’t listen to electrical engineers when they complain about needing to know nitty-gritty calculus, and that’s a year of study for someone with an IQ over 150. Sometimes fields have prerequisites.
You could do good programming without knowing too much physics. You could probably do good physics without knowing too much machine learning, assuming you have someone in your department who does know machine learning. You could do good biology with chemistry alone, though that requires minimal physics, as well.
But lukeprog’s curriculum / reading list suggests that you can’t do good philosophy without knowing math, machine learning, physics, psychology, and a bunch of other subjects. If that is true, then virtually no one can do good philosophy at all, because absorbing all the prerequisites will take a large portion of most people’s lifetimes.
And we independently observe that almost no one can do good philosophy at all, so the theory checks out.
Nothing better than a hypothesis that makes correct empirical predictions!
Besides the sciences that Luke mentioned, don’t forget people also need to learn the subsets of philosophy which actually are consistent and compatible with science. In the case of philosophy of mind, I began a list here: http://lesswrong.com/lw/58d/how_not_to_be_a_na%C3%AFve_computationalist/
What seems needed is a group of creative 150-IQ people willing to take the MegaCourse and create good philosophy as fast as possible, so we can use it for whatever purposes. That group should probably, like the best intellectual groups examined by Domenico de Masi in his “Creativity and Creative Groups,” get a place to be together, and work earnestly and honestly.
Finally, they must be sharp in avoiding biases, useless discussions, and counterfactual intuitions.
This gets more likely every minute.…
It doesn’t really take that long to learn things. But good philosophy already looks like this—my favorite political philosophy professor threw out references to computing, physics, history, etc. assuming students would get the references or look them up. Much like pride is the crown of the virtues, philosophy should be the crown of the sciences.
Yes, some subjects are just hard. But there are limits to this. How much one needs is a function of how much one wants to focus on a particular subject. So for example most physicists probably need three semesters of calc, linear algebra, and stats, at minimum. But only some of the physicists will need group theory, while others will need additional stats, and others will need differential geometry. But almost no physicist will need all of these things. Similarly, some degree of specialization may make sense if one wants to do philosophy.
That’s in fact already the case: the moral philosopher has read a lot more about the history of moral philosophy, and the same goes for the person studying epistemology, or other basic aspects of the subject. So to some extent the issue isn’t the amount of learning that is required, but a disagreement about what is required, and how cross-disciplinary it should be.
Correction: you don’t. Those of us who teach EEs (really, any class of engineers), do.
Sure, but the curriculum doesn’t actually change in response to engineering students complaining about the difficulty of their calculus classes. That’s because the stuff in those classes actually applies, in easy-to-see ways. There’s almost a 1:1 match between the syllabi of engineering math classes and the math that engineering classes end up needing. (This is not a coincidence.)
This is not correct. Compare a vector calculus book from fifty years ago with the relevant sections of Stewart.
Agreed!
What? Surely lots of electrical engineers have IQs less than 150 (the average being approximately 126; ETA: actually that’s the average for EE PhD students, but still). How did they pass their calculus courses?
I assume they meant that an EE with IQ > 150 would require a year; many places distribute their calculus courses over two years, and some students require longer.
Propositional and predicate calculus is routinely taught in undergraduate philosophy programs. Does taking the time to acquire such skills make people ‘less philosophical’? Bugmaster, it sounds like you’re buying into the meme that true philosophy must avoid being too rigorous; if a paper consists mostly of equations or formalized proofs, it’s somehow less philosophical even if contentwise it’s nothing but an exegesis of Kant. This deep error is responsible not only for a lot of the philosophical laziness lukeprog takes issue with, but also for our conception of philosophical fields like metaphysics as being clearly distinguishable from theoretical physics, or of philosophy of mind as being clearly distinguishable from theoretical neuroscience. Define your academic fields however you otherwise want, but don’t define them in terms of how careful they’re allowed to become!
My comment wasn’t about philosophy, but about all those other topics: math, physics, machine learning, etc. They are very rigorous, and will take a lot of time to understand properly, even at an undergraduate level. There are only so many hours in the day; and while you are sitting there debugging your linked list code or whatever, you’re not doing philosophy.
My point is that if students do as lukeprog suggests, and study all those other topics first, they won’t have any time left for philosophy at all—assuming, of course, that they actually try to understand the material, not just memorize a few key points.
We certainly don’t start science students off with the latest and greatest science, because there’s a boatload of other science they have to study before it’ll do them any good. In practice, almost everything we teach undergrads in hard-science fields is pre-1980, because of the amount of time it takes to get a student up to speed with where the frontier of the field had progressed by 1980.
Point, but:
1) The science that lukeprog is concerned with comes from subfields that are substantially younger than most major fields in the hard sciences, e.g. the heuristics and biases program is much younger than Newtonian mechanics.
2) Old hard science at least has the benefit of working within a certain domain, e.g. Newtonian mechanics is valuable because it is still applicable to macroscopic objects moving slowly, and any future theory of physics is constrained by having to reduce to Newtonian mechanics in certain limits. The older results in the science that lukeprog is concerned with are misleading at best and dangerously wrong at worst.
In other words, I think what lukeprog is advocating is less analogous to teaching undergraduates about string theory before Newtonian mechanics now and more analogous to teaching undergraduates about thermodynamics before phlogiston theory in the 1800s (edit: and I see JoshuaZ made this point already).
Correct.
Could I have some examples of things being taught in philosophy courses which are definitely known to be wrong?
Knowledge is justified true belief. /
There is something inside your mind, called a “Propositional Attitude,” which has a truth value regardless of the world around you. The truth value sits in your mind. /
Now for the primitive ones (these are extremely relaxed, and angry, descriptions/summaries): Man is naturally good, but rich people made a contract which started a bad nature. Rousseau /
Man is naturally mean, but an abstract entity made itself as of the creation of a social contract (implicit or explicit) and that is what prevents evil from spreading. Hobbes. /
Angels are separated by 72 Kilometers each in the heavens. Aquinas. /
Brace yourselves for this one: That of which nothing greater can be thought is smaller than not that of which nothing greater can be thought, thus the latter can’t exist (don’t ask for what sense of smaller), because it can’t exist, there is one thing that is that of which nothing greater can be thought, and since its negation can’t be thought, all its properties must be positive, and it is God, because it is great and undeniable. Anselm. (ok, I grant that I forgot the bulk of his original since last reading it in 2007… but it is along these lines)
EDIT: Just to clarify before people read Robb below, I’ll put in my disclaimer about his response: Robb, your philosophers descend from the anglophone tradition of philosophy, and were trained in the analytic style. The ones I mention descend from the French tradition, are fond of structural readings, and do in fact state all those as fact.
Ever since Gettier’s 1963 paper, this has not been taught except as a useful extremely close approximation of the correct definition. Since philosophers (and some linguists) are the ones who have been criticizing this heuristic, and since their criticisms concern very special cases of ‘epistemic luck,’ this is a doubly misleading charge. The standards philosophers are adopting when they doubt that knowledge is justified true belief are actually, in most contexts and for most everyday purposes, unreasonably high; dictionaries and classrooms almost invariably define terms in more approximate, inexact, and exception-allowing ways than do philosophers. (Indeed, this is why philosophers are often criticized for being too precise and ‘nitpicky’ in their terminological distinctions.)
I agree philosophers take the reality of propositions too seriously. However, mathematicians do precisely the same thing. In both cases it’s not that the doctrine is “definitely known to be wrong;” it’s that there’s no good reason to affirm causally inert abstracta.
This is not Rousseau’s view, and is not generally taught as fact by philosophers.
This is not Hobbes’ view, and is not generally taught as fact by philosophers.
This is theology, not philosophy. And even if you deem it philosophy, it’s not generally taught as fact by philosophers. (Including Christian philosophers.)
That is not Anselm’s argument, and Anselm’s actual argument is not considered by philosophers (even Christian philosophers) to be sound, as originally formulated.
Quite. It is not clear that JTB is just plumb wrong, post Gettier, and in any case it is hardly a charge against philosophy, when philosophy noticed the problem. If Diego has a better answer, I would like to hear it. Science types like to announce that “knowledge is information,” but that is inferior to JTB, because the requirement for truth has gone missing.
I don’t, particularly. I think this could be a case of taking a façon de parler as an ontological commitment, cf. PWs (possible worlds).
I believe Plantinga has tried to revive it, but that is considered a novelty.
It’s true that some people treat ‘possible worlds’ and/or ‘propositions’ as mere eliminable manners of speech. But a lot of prominent philosophers also treat one or the other as metaphysically deep and important, as a base for reducing other things to a unified foundation rather than as a thing to be reduced in its own right. Philosophy as a whole deserves at least some criticism for taking such views seriously, for the same reason mathematics deserves criticism for taking mathematical platonism seriously.
And to clarify, the target of my criticism isn’t modal realism; modal realism is a straw-man almost everywhere outside the pages of a David Lewis article. Modal realism is the doctrine that possibilia are concrete and real; I’m criticizing the doctrine that possibilia (or propositions, or mathematical entities...) are abstract and real.
Lots of people, from Descartes onward, have given variations on ‘ontological’ arguments. But that Anselm’s original argument is fallacious (specifically, equivocal) is beyond reasonable doubt.
What’s wrong with that?
You are muddling propositions and propositional attitudes.
Not taught as facts. More likely to be taught as compare-and-contrast.
Absolutely definitely not taught as fact, and unlikely to be touched on outside of specialist mediaeval phil. courses.
INVARIABLY taught as something that was trashed by subsequent philosophers. Couldn’t be wronger.
Meta: Testing the use of enter This should be in a separate line / So should this. Now with a period.
EDIT: Please downvote this to −3 so people don’t have to read its useless offtopic meta-children.
You do realize that you can try out multiple things in Markup by editing the same comment several times, rather than by making multiple comments, right?
You may be interested in the “show help” button to the bottom right of the text box.
Indeed I am :)
Also it only shows when the text is being edited.
Use two spaces after the line
See?
MM, let us test this
now this is the second line on another interpretation of what you meant this is already the fourth
and here should be the third or fifth.
Got it, thanks
double enter one space one enter one space one enter one space one enter double space
and finally double space enter
both double enter and double space enter work.
META: now does shift enter work if it does this is the third line
Upvoted for noticing something particularly relevant, yet nearly invisible.
So if I had to design an intro to philosophy/first-year philosophy course (and I will), at the moment I would do this:
There are four ideas in philosophy which stand above the others as ideas which have shaped our thinking and our civilization to the point where you’ve probably heard of these before you came to class: Plato’s theory of forms, Aristotle’s theory of causes, Descartes’ ‘Cogito ergo sum’, and Kant’s categorical imperative. The aim of this course is to understand what philosophy is, and why one should engage in it.
The course will discuss the writings of these four philosophers:
Plato- Selections from Plato’s Republic and Phaedo
Aristotle- Physics book I and II, and III.1, and De Anima II.1, 5.
Descartes- The Meditations
Kant- Groundwork for the Metaphysics of Morals
That’s my thought. I’m a frequent reader, and sometimes a poster, on Less Wrong. I’m also going to be teaching undergraduates philosophy in about two years, and right now my idea of how to go about doing this is very different from yours. I very much do not want to do a bad job, or hurt my students, so if I’m wrong, I should be convinced otherwise.
Maybe it wasn’t your purpose, but there’s no argument in this post. Please, please, please present an argument. You (or whoever wants to try) would be doing me a very great benefit by correcting me on this, if indeed I am wrong.
I think you need to focus in on what your goals are. Lukeprog’s idea of an intro to philosophy class sounds like a boot camp for aspiring professional philosopher-kings and intellectual revolutionaries. Yours sounds like a historical overview of the effects of a scattered set of ideological trends upon human culture. There isn’t any clear unifying content of your imagined course, as there would be if you focused, say, just on game-changing epistemological texts (like the much more engaging and well-written Berkeley in lieu of Aristotle) or just on game-changing meta-ethical ones.
On the other hand, if your goal is to make students think critically and rigorously about very deep issues, not just to expand their historical horizons, then you may want to choose more accessible secondary literature. John Perry’s A Dialogue on Personal Identity and Immortality is a superb candidate; a short, accessible dialogue packed with arguments much clearer and more human than those you’d find in a Platonic dialogue.
Good, excellent suggestion: much of this disagreement seems to come down to a disagreement over what a) the goals of philosophy are, and b) the goals of philosophical training are. So, if I had to state the goal of my course, I would say: the aim of this course is to understand what kinds of questions philosophy asks, and how we should approach those questions. That could use a lot more filling out, of course. And I don’t think those four ideas are scattered, ideological, or trends, but that’s not something you could have gotten from my description.
Anyway, what do you think is the right view on these two goals? What does philosophy aim to do, and what should training aim to achieve?
I think the most important thing an introductory philosophy class can do is to taboo philosophy. Ask instead: What can I get away with teaching a bunch of undergraduates under the umbrella term ‘philosophy’ that will be most useful to human beings (or specifically to the sorts of human beings who are likely to study ‘philosophy’), and that they are least likely to acquire by other means? Your goal shouldn’t be to make them understand what people tend to classify as ‘philosophy’ vs. ‘non-philosophy;’ it should be to maximize their ability to save the world and live fulfilling lives. I don’t think reading Berkeley need be unhelpful for saving the world; but it all really comes down to how you read Berkeley.
Okay, could I ask you for just a couple more details? It seems like a big moving part in your description is saving the world, and another is leading a fulfilling life. What do you mean by these things?
That’s a very large question, and my answer will depend on where you’re coming from and where you want to take this discussion. You probably have your own intuitive conception of where, in some general terms, you’d like the world to go. ‘Philosophy’ is a largely artificial, arbitrary, and unhelpful schema, and you owe it no fealty. So my main goal was not to persuade you to adopt my own vision of a happier and more rational world. It was to motivate you to reframe what teaching a ‘philosophy’ class is in a way that makes you more likely to exploit this opportunity to move the world infinitesimally closer to your own vision for the world.
If I were teaching an Intro to Philosophy class, I might break it down as follows:
Part 1: Destroy students’ complacence. Spend a few weeks methodically annihilating students’ barriers, prejudices, thought-terminating clichés, and safety nets. Don’t frame the discussion as ‘philosophy.’ Frame it as follows:
“OK, we’re trying to understand the world, and get what we want out of life. And we can’t just rely on authorities, common sense, or usual practice; those predictably fail. So we’ll need to reason our way to understanding the world. But our reasoning itself seems infirm. When we debate, we hit walls. Our ignorance corrodes our predictions. We let language and concepts confuse us. We don’t entertain enough possibilities, and we don’t weight them fairly. Paradox, ambiguity, and arbitrariness seems to threaten our human projects at every turn. Is it really possible for us to patch our buggy brains to any significant extent?”
The answer is Yes. But the best way to reach that conclusion is to test how much our own capacities can improve in practice. And the best test will be for us to take a few of the most fundamental riddles humans have devised, and see whether we can resolve or dissolve them by introducing more rigor and creativity to our thinking.
Part 2: Incrementally build students’ confidence back up. Spend about 3⁄5 of the course focusing very closely on one or two simple, readable, accessible, counter-intuitive analytic philosophy texts in epistemology/metaphysics (like Perry’s or Berkeley’s dialogues), teaching students that making progress in understanding and critically assessing good arguments requires rigor and patience, and, just as importantly, that they are capable of exercising the rigor and patience needed to make important progress on deep issues.
In other words, this part of the course is about trying very hard to impress students regarding the utility and value of carefully reasoning about very general questions — these issues are hard — without intimidating them into thinking they as individuals are ‘non-philosophers’ or ‘non-intellectuals,’ and without motivating them to despairingly or triumphantly regress to an ‘oh it’s all so mysterious’ relativism. It’s a precarious lesson to teach — making them skeptical enough, but not too skeptical! — but an indispensable one. And the best way to teach it is by concretely empowering them to think better, and letting them see the results for themselves. Acquaint students with a variety of tricks and techniques for analyzing and evaluating arguments, including deductive logic, Bayesian empiricism, semantics, and pragmatics.
Part 3: Make students put it all into practice. Coming up for air from these deep metaphysical and epistemological waters, spend the last 3-4 weeks talking about how to use these philosophical doctrines and techniques in daily life. I’m imagining something in between a CFAR course and a whirlwind tour of existentialism. This will engage and inspire students who are a bit more continental than analytic in temperament, while reiterating that the same very careful techniques of reasoning can be applied (a) to everyday life-decisions, and (b) to even more abstract and difficult riddles than might initially have seemed possible. Ideally, the pragmatism and humanism of this part of the course should also help finish disenchanting any remaining relativists, positivists, and hyper-skeptics in the class. (Or is it re-enchanting?)
How’s that sound to you?
2 questions:
How do I sign up?
Who do I give my money to?
It sounds like an abridged Eightfold Path.
First of all, thanks for putting so much time and thought into your reply. Your class plan is well thought out and interesting, but I think you and I are standing on opposite sides of a substantial inferential ravine. So far as I can tell, your answers to my two questions were the following:
1) What would it mean to save the world? To bring about, however incrementally, my vision for the world.
2) What does it mean to live a fulfilling life? To get what one wants.
These two answers seem to place very great confidence in my (or my students’) vision for the world, and my (or my students’) desires. I take it, however, that one of the more serious philosophical questions we should be discussing is what our vision for the world should be, and what we should want. So by saying you don’t want to persuade me to adopt your own vision for the world, etc., it seems to me you skipped the most important part of the question. That’s exactly what I’d need to know in order to structure a course well.
Otherwise, if I teach my students to be more effective at getting what they want and bringing about their vision, while what they want is harmful and their vision is terrible, then I’ll be doing them and everyone else a great deal of harm.
That’s not the meaning of ‘save the world.’ I just took it for granted that the preservation of human-like things would probably be part of your vision.
Better: To get what one would most want, given perfect knowledge, computational capacities, and reasoning skills. (At least, this would be closer to the optimally fulfilling life.)
We’re humans. We don’t have anyone to appeal to but ourselves and each other.
Sure. Though you can read a fair amount of that out of what I did tell you about course layout.
There are two questions here. First, are people’s most profound and reflective goals in the end perverse and destructive? If so, then humanity may do better if kept in ignorance than if enlightened.
Second, can we teach people to re-evaluate and improve their values? Their current vision may be ‘terrible,’ but part of teaching people to understand how to attain their values is teaching people how to recognize, assess, and revise their values. This is an essential component of Part 3 of the course structure.
Acting may be very dangerous. But doing nothing is far more dangerous.
No, I agree with you that there is a right thing to want, and a right vision of the world, and that by learning we can come at least somewhat closer to understanding and realizing these things. This last post was helpful, and I see that we disagree less than I thought we did. Really, I think the only substantial differences between our two course designs are the selection of texts, and that I think part 2 should be a larger part of the course, and should focus more directly on the question of what is right, what there is, etc. (Incidentally, I only have 10 weeks, with two meetings per week, to work with.) Aside from ethics (which we learn in order to be better people), philosophy is in general not a means to an end, so I don’t think there’s as much a question of application.
10 weeks is pretty short! Sounds like a good challenge. I was assuming 16 weeks while trying to lay out a simple curriculum last night, and I got the following structure:
I. The Problem of Doubt
II. The Problem of Death
III. The Problem of Life
He could certainly teach you how to take theories to their logical conclusions no matter how bizarre they seem...
Really? I interpret this as a cheap way for Descartes to attack the status of people who don’t spend as much time thinking as he did (“Look at those boring merchants. They spend all their time buying and selling stuff, they hardly ever do any real thinking—they might not even exist”). Obviously this trick was useful to subsequent philosophers as well, so they rallied to his banner.
The context in which the cogito occurs makes this interpretation extremely implausible, in my opinion.
Bracketing the historical and phenomenological silliness, this also comes perilously close to the fallacy of denying the antecedent.
You should take my class, then! We could discuss whether or not this view is right, in light of what Descartes says in the Meditations.
Or one could conclude that, since EY’s post is so lacking in the clarity and argument that would be expected of a phil. student, he just isn’t in any position to say phil. is crap. It is successfully teaching skills he hasn’t got and needs.
Do you mean Luke, or are you referring to some other Yudkowsky post?
I’d like to be explicit that I disagree with you, though this is implied in the grandparent. I think Luke, and EY, have very good philosophical heads on their shoulders. I also don’t think there’s such a thing as philosophical expertise (contra Luke, I admit), so Luke’s admitted lack of traditional philosophical training doesn’t bother me (and shouldn’t bother you).
I don’t care about philosophical training (having had none myself). I do care about clarity and good argumentation.
I agree. So would my mentor, and he’s said so explicitly: “I am an expert at logic, but I don’t believe anyone is an expert at Philosophy.”
Minor nitpick:
Those aren’t the world’s top 5 philosophy departments, those are the top 5 for the United States. In the rankings for the English-speaking world, Oxford is #2 after NYU (after that the rankings are the same until you get down to ANU and Toronto, which are tied with a bunch of other schools for #15).
Furthermore, the Philosophical Gourmet Report doesn’t try to compare English-speaking and non-English-speaking universities.
Fixed.
In fact, you might be interested to know that Oxford offers its undergraduate philosophy largely in the form of Physics/Philosophy, Maths/Philosophy, and CS/Philosophy joint-honours degrees. These provide the mathematical/scientific prerequisites you mentioned while introducing the philosophical issues associated with them.
Even that’s not quite right. There is a tie for 5th place between Harvard and Pitt. The fact that Harvard is listed before Pitt appears to be due to lexicographical order.
Playing Devil’s Advocate...
As Eliezer has argued, it would be greatly beneficial if science were kept secret. It would be wonderful if students had the opportunity to make scientific discoveries on their own, and being trained to think that way would greatly advance the rate of scientific progress. Making a scientific breakthrough would be something a practicing scientist would be used to, rather than something that happens once a generation, and so it would happen more reliably. Rather than having science textbooks, students could start with old (wrong) science textbooks or just looking at the world, and they’d have to make all their own mistakes along the way to see what making a breakthrough really involves.
This is how Philosophy is already taught! While many philosophers have opinions on what Philosophical questions have already been settled, they do not put forth their opinions straightforwardly to undergrads. Rather, students are expected to read the original works and figure out for themselves what’s wrong with them.
For example, students might learn about the debate between Realism and Nominalism, and then be expected to write a paper about which one they think is correct (or neither). Sure, we could just tell them the entire debate was confused, but then we won’t be training future philosophers in the same way we would like to train future scientists. The students should be able to work out for themselves what the problems were, so that they will be able to make philosophical breakthroughs in the future.
While a nice idea, it’s hardly workable. There are roughly two types of science consumers: researchers and users. The users do not care what’s under the hood, they just need working tools. Engineering is an example. Making them discover Newton’s laws instead of teaching how to apply them to design stable bridges is a waste of time. Researchers build new tools and so have to understand how and why the existing tools work. This is a time-consuming process as it is (20+ years if you count all education levels including grad studies). Making people stumble through all the standard dead ends, while instructive, will likely make it so much longer. The current compromise is teaching some history of science while teaching science proper.
Indeed. And look where it led. The whole discipline appears largely useless to the outsiders, who hardly care what misinformed opinion some genius held 1000 years ago.
The current compromise isn’t working. A smidgen of history is taught, but usually in the mode of fact-memorization, not in the mode of exploration and discovery. The game method, whatever its value in philosophy, is certainly useful for scientists—it not only creates better (more dynamic, audacious, rigorous) thinkers in general, but also gives people a better sense of what science is and of why it is not ugly or dehumanizing. Teaching people arithmetic is of much greater value when successfully accompanied by a taught appreciation for and joy in arithmetic.
My recommendation: Ditch the ‘philosophy/science/history’ breakdown of courses, at least at the lower levels. If you’re trying to teach skills and good practices, you want to be able to draw on philosophical, scientific, and historical lessons and exercises as needed, rather than respecting the rather arbitrary academic divisions. Given low levels of long-term high school science class fact retention, there’s simply no excuse to not be incorporating ‘philosophical’ tricks (like those taught in the Sequences) and game-immersion at least as a mainstay of high school, whether or not we want to maintain that method at the higher levels.
And I don’t think this is only necessary for researchers. In some cases it’s even more important for users to be good scientists than for researchers to be, since our economic and political landscapes are shaped by the micro-decisions of the ‘users’.
Let me try to separate two different issues here, teaching science and teaching rational thought. The latter should indeed be taught better and to most people. The standard “critical thinking” curriculum is probably inadequate and largely out of date with the current leading edge, which is hardly surprising. Game immersion can be one of the tools used to teach this stuff. A successful student should then be able to apply their new rationality skills to their chosen vocation (and indeed to making a good choice of vocation), be it research or engineering, commerce or politics.
This is largely a typical mind fallacy. Plenty of people can find no joy in arithmetic, just like plenty of people find no joy in poetry, no matter how hard you try to make them.
Right, this is the new critical thinking curriculum part, unrelated to any particular science.
And here’s why I try not to separate those two issues: (1) Teaching science and teaching rational thought are largely interdependent. You can’t do one wholly without the other. (2) ‘Rational thought’ and ‘critical thinking’ don’t generally get their own curricula in schools. So we need to sneak them into science classrooms, math classrooms, philosophy classrooms, history classrooms—wherever we can. Reminding ourselves of the real-world intersectionality, fuzziness, and interdependence of these fields helps us feel better about this pragmatic decision by intellectually justifying it; but what matters most is the pragmatics. Our field divisions are tools.
The worry of typical-mind errors looms large on any generalized account, including a pessimistic one. To help combat that, I’ll make my background explicit. I largely had no interest in mathematical reasoning in primary and secondary schools; hence when I acquired that interest as a result of more engaging, imaginative, and ‘adventurey’ approaches to teaching and thinking, I concluded that there were probably lots of other students for whom mathematics could have been taught in a much more useful, personally involving way.
Perhaps those ‘lots of others’ are still a minority; no data exists specifically on how many people would acquire a love of arithmetic from a Perfectly Optimized Arithmetic course. But I’m inclined to think that underestimating people’s potential to become better lay-scientists, lay-mathematicians, and lay-philosophers at this stage has greater potential costs than overestimating it.
Are you using some definition of “working” narrow enough to exclude all the stable bridges, faster microchips, mathematical proofs &c. being produced by people who were taught the current compromise?
Yes, I am. If science education is working, then most students who take a science class should see a subsequent measurable long-term increase in scientific literacy, critical thinking skills, and general understanding. Our current way of teaching history may be having no positive effect even on our bridge-building, microchip-designing capacities. History as it’s currently taught is if anything a distraction from those elements that are producing technological progress.
Yes, absolutely. As shminux points out below, it isn’t practical to expect students to (re-)make real scientific discoveries during their training, but that doesn’t mean that we can’t game-ify scientific training using a simpler universe wherein novel discoveries are a lot closer at hand.
Well, the simplest version of this is to do something like play Zendo, but that has a variety of problems, such as the fact that rule sets often connect more to human psychology than anything else.
This would require a larger proportion of philosophy professors to admit that the debate is confused.
Philosophy isn’t science and it isn’t religion either.
News to me. What’s the right answer then?
As is usually the case for a confused question, the answer is dissolving the question. Why do we care whether categories exist? If this is a question about the meanings of words, that’s really just an empirical question about their usage. If it’s a question of whether “cars” forms a meaningful cluster in conceptspace, we have lots of different ways of addressing that question that entirely sidestep the Realism/Nominalism debate.
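To make that concrete, here’s a toy sketch of one such way of addressing the question (every object, feature, and number below is invented purely for illustration): represent things as feature vectors and ask whether the things we call “cars” sit closer to one another than to everything else. No stance on Realism vs. Nominalism is needed to run the check.

```python
import math

# Toy feature vectors: (wheels, has_engine, mass in tonnes).
# All names and numbers here are invented for illustration.
objects = {
    "sedan":      (4, 1, 1.5),
    "pickup":     (4, 1, 2.2),
    "hatchback":  (4, 1, 1.2),
    "bicycle":    (2, 0, 0.01),
    "horse":      (0, 0, 0.5),
    "skateboard": (4, 0, 0.003),
}
cars = {"sedan", "pickup", "hatchback"}

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Mean distance among cars, vs. mean distance from cars to non-cars.
car_pairs = [(a, b) for a in cars for b in cars if a < b]
within = sum(dist(objects[a], objects[b]) for a, b in car_pairs) / len(car_pairs)
cross_pairs = [(c, o) for c in cars for o in objects if o not in cars]
across = sum(dist(objects[c], objects[o]) for c, o in cross_pairs) / len(cross_pairs)

# "Car" is a meaningful cluster here just in the sense that its members
# are mutually closer than they are to outsiders -- an empirical fact
# about the feature space, with no metaphysics required.
print(within < across)  # True
```

Whether such a cluster exists is then an ordinary empirical question about the feature space, which is the point: the question gets answered (or dissolved) without anyone deciding whether universals “really” exist.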
Of course, it’s hard to even pin down what people mean by Realism and Nominalism, so the above might not even be addressing the right confused question. As JS Mill noted, Nominalism when it was coined referred to the position that there are no universals other than names. But some see the debate as a continuation of the Plato/Aristotle debate about the existence of forms, while others see it as merely an irrelevant blip in the history of Medieval philosophy, preceded by the conflict between Materialism and Idealism and supplanted by more interesting conflicts such as Rationalism vs. Empiricism.
This sort of equivocation does not happen with key terms in a field that has its shit together.
And if it is about how a fundamental aspect of reality—identity and difference—works then we don’t. The debate about universals is ongoing with people like Roger Penrose and David Armstrong weighing in.
Uh huh. So what exactly are physical laws?
I largely agree with this answer. My view is that reductionist materialism implies that names are just a convenient way of discussing similar things, but there isn’t something that inherently makes what we label a “car”; it’s just an object made up of atoms that pattern matches what we term a “car.” I suppose that likely makes me lean toward nominalism, but I find the overall debate generally confused.
I’ve taken several philosophy courses, and I’m always astonished by the absence of agreement or justification that either side can posit. I think the biggest problem is that many philosophers make some assumption without sufficient justification and then create enormously complex systems based on those assumptions. But since they don’t demand strenuous justification for the underlying premises (e.g. Platonic idealism), ridiculous amounts of time end up being wasted learning about all the systems, rather than figuring out how to test them for truth (or even avoiding analytical meaninglessness).
So...it isn’t science.
I’m even less clear on what a “meaningful cluster in conceptspace” means than I am on the traditional philosophical formulations of the problem. What would a “meaningless” cluster in conceptspace look like? Is there a single unique conceptspace, and how is it defined?
Good philosophers must beware of equivocation. Universal is an ambiguous term, so taboo it and distinguish several things it’s stood for:
predicate—term that can be applied repeatedly. (If ‘nominalism’ is meant to reduce all universals to predicates, then it’s an ill-conceived project, since it seems to be trying to explain commonality in general by reducing it to commonality between words; but if the latter is left unexplained, then commonality itself is left unexplained.)
common nature—something intrinsically possessed by all the entities that share a property. An abiding ‘essence,’ some kind of ‘quarkhood’ that inheres in all the quarks. Common natures are a posit to explain similarity. They are worldly, thus completely unlike Platonic Forms.
common cause—a single cause that has multiple effects. A Form acts as a common cause, but not a common nature, since on Plato’s view they causally produce the recurrence of nature’s patterns ‘from the outside.’ (In some ways, they’re an anthropocentric precursor to Conway’s Game of Life.) Like common natures, common causes can be posited with the intent of explaining why our universe exhibits similarity. If the question ‘Why do properties recur at all?’ or ‘Why are some characteristics of the world the same as each other?’ is well-formed, then there is nothing mysterious or ill-conceived about these posits, though they may perhaps be theoretically unnecessary, unenlightening, or ad-hoc.
True. Why is this relevant?
If you expect it to work like science, it will look like bad science. That is what most of the “philosophy is bad” criticism boils down to.
I’m confused. The previous exchange was:
How does arguing that philosophy isn’t science or religion help here?
It helps one understand why about 99% of the criticism of philosophy on LW is misbegotten.
That was utterly irrelevant in context. The comment you were responding to was praise of current methods in philosophical pedagogy, and you don’t seem to be disagreeing with it. You were alluding to a tangential point—I don’t know why you would expect anyone to know what you were thinking.
The question in reply was how its not being science or religion has anything to do with putting opinions straightforwardly to undergraduates. How are they connected?
r/philosophy is not amused by this:
http://www.reddit.com/r/philosophy/comments/14e815/train_philosophers_with_pearl_and_kahneman_not/
I honestly have no idea which, if any, of the reddit philosophers are trolling. It’s highly entertaining reading, though.
I hate that sub. I was subbed for like a week before I realized that it was always awful like that.
As a philosophy student with a great interest in math and computing, I can definitely attest to the lack of scientific understanding in my department. Worse, it often seems like some professors actively encourage an anti-scientific ideology. I’m wondering if anybody has any practical ideas on how to converse with students and professors [who are not supportive or knowledgeable of the rationalist and Bayesian world-view] in a positive and engaging way.
You could introduce some of your friends into LessWrong topics by labeling them as “philosophy”. (Start with the articles that don’t explicitly criticize the current state of philosophy, obviously.)
The label seems credible—some of my friends, when I sent them a link to LW, replied that it seems to be a website about philosophy. And when a person already has “studying philosophy” as part of their self-concept, they may be more likely to agree to look at something labeled as “philosophical”.
Perhaps you could just taboo “science” and describe scientists as a weird branch of philosophers—philosophers who try to test their ideas experimentally, because this is what their weird philosophy tells them to do. Now learning about such weird philosophy would be interesting, wouldn’t it?
Tabooing the word “science” seems to be a pretty good idea, along with other scientific jargon. I think many of the idealist and continental philosophy students are not afraid of science exactly, but fear that it somehow makes the human condition worse: more mechanical, and less special.
Thanks
Well, there’s also the various concerns about research programs — the social institutions of science that direct which knowledge is found. Consider the following argument:
In the 20th century, a lot of research effort and funding was spent on discovering what objective properties of the world might be useful to know in order to blow people up more effectively (atomic physics, e.g.), order them around inhumanely (behaviorism), control their wants and desires (advertising and propaganda), and so forth. There are presumably also objective properties of the world that would be useful to know in order to make peace and prosperity for all — and these also can be empirically investigated; but the goal of discovering them is not as good of a source of funding as those other ones; and so they are by and large not the subject of institutional science.
First, make sure that they’re actually approachable at all.
Second, don’t approach them in a combative fashion, like this post does. You need to approach them by understanding their specific view of morality and epistemology and their view of how philosophy relates to that, and how it should relate to it, or even if they think it does or should at all. Approach them from a perspective that is explicitly open to change. Ask lots of questions, then ask follow up questions. These questions shouldn’t be combative, although they should probably expose assumptions that are at least seemingly questionable.
Third, make sure you know what you’re getting into yourself. Some of those guys are very smart, and they have a lot more experience than you do. Do your homework.
I’m trying to think what I would do. I don’t know how I’d go about creating the groundwork for the conversation or selecting the person with whom I would converse. But here’s an outline of how I think the conversation might go.
Me: What do you believe about epistemology?
Them: I believe X.
Me: I believe that empiricism works, even if I don’t know why it works. I believe that if something is useful that’s sufficient to justify believing in it, at least up to the point where it stops being useful. This is because I think changing one’s epistemology only makes sense if it’s motivated by one’s values since truth is not necessarily an end in itself.
I think X is problematic because it ignores Y and assumes Z. Z is a case of bad science, and most scientists don’t Z.
What do you believe about morality?
Them: I believe A.
Me: I believe that morality is a guide to human behavior that seeks to discriminate between right and wrong behavior. However, I don’t believe that a moral system is necessarily objective in the traditional sense. I think that morality has to do with individual values and desires since desires are the only form of inherently motivational facts and are thus the key link between epistemic truth and moral guidance. I think individuals should pursue their values, although I often get confused when those values contradict.
I sort of believe A, in that _. But I disagree with A because X.
What do you think philosophy is and ought to be, if anything?
Them: Q.
Me: Honestly, I don’t know or particularly care about the definitions of words because I’m mainly only interested in things that achieve my values. But, I think that philosophy, whatever its specific definition, ought to be aimed towards the purpose of clarifying morality and epistemology because I think that would be a useful step towards achieving my individual values.
Thank you very much Chaos. I did not realize that my post came off as abrasive, I appreciate you pointing that out. Your example sounds quite reasonable and is more along the lines of what I was looking for.
Your post didn’t come across as abrasive, Luke’s did. Sorry for my bad communication.
Based on the previous paragraphs, this should probably end with “because ~X.”
I didn’t have any specific format in mind, but you’d be right otherwise.
The things on your curriculum don’t seem like philosophy at all in the contemporary sense of the word. They are certainly very valuable at figuring out the answers to concrete questions within their particular domains. But they are less useful for understanding broader questions about the domains themselves or the appropriateness of the questions. Learning formal logic, for example, isn’t that much help in understanding what logic is. Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be. I gather your point is that it’s only certain concrete questions that have any real meaning.
This naive logical positivism is dismaying in a blog about rationality. I certainly agree that there is plenty of garbage philosophy, and that most of Aristotle’s scientific claims were wrong. But the problem with logical positivism is that its claim about what’s meaningful and what isn’t fails to be a meaningful claim under its own criteria.
Your dismissal of certain types of philosophy inevitably rests on particular implicit answers to the kinds of philosophical questions you dismiss as worthless (like what makes a philosophical idea wrong?). Dismissing those questions—failing to think through the assumptions on which your viewpoint rests—only guarantees that your answers to those questions will be pretty bad. And that’s something that you could learn from a careful reading of Plato.
It certainly doesn’t hurt! Learning formal logic gives you data with which to test meta-logical theories. Moreover, learning formal logic helps in understanding everything; and logic is one of the things, so, there ya go. Instantiate at will.
Sure. But for practical purposes (and yes, there are practical philosophical purposes), you can’t be successful in either goal without some measure of success in both.
Where does lukeprog say that? And by ‘meaning’ do you mean importance, or do you mean semantic content?
Lukeprog and Eliezer are not logical positivists in the relevant sense. And although logical positivism is silly, it’s not silly for obvious reasons like ‘it’s self-refuting;’ it isn’t self-refuting. The methodology of logical positivism is asserted by positivists as an imperative, not as a truth-apt description of anything.
In some cases, yes. But why do you think lukeprog is dismissing those questions? He wrote, “I think many philosophical problems are important. But the field of philosophy doesn’t seem to be very good at answering them. What can we do? Why, come up with better philosophical methods, of course!” Lukeprog’s objection is to how people answer philosophical questions, more so than to the choice of questions themselves. (Though I’m sure there will be some disagreement on the latter point as well. Not all grammatical questions are well-formed.)
I think that logical positivism generally is self-refuting. It typically makes claims about what is meaningful that would be meaningless under its own standards. It generally also depends on ideas about what counts as observable or analytically true that are likewise not defensible—again, under its own standards. It doesn’t change things to formulate it as a methodological imperative. If the methodology of logical positivism is an imperative, then on what grounds? Because other stuff seems silly?
I am obviously reading something into lukeprog’s post that may not be there. But the materials on his curriculum don’t seem very useful in answering a broad class of questions in what is normally considered philosophy. And when he’s mocking philosophy abstracts, he dismisses the value of thinking about what counts as knowledge. But if that’s not worthwhile, then, um, how does he know?
Let’s try to unpack what ‘self-refuting’ could mean here. Do you mean that logical positivism is inconsistent? If so, how? A meaningless statement is not truth-apt, so it can’t yield a contradiction. And you haven’t suggested that positivists assert ‘Non-empirical statements are meaningless’ is both meaningful and meaningless. What, precisely, is wrong with positivists asserting ‘Non-empirical statements are meaningless,’ and asserting that the previous sentence is meaningless as well? You’re framing it as an internal problem, but the more obvious and compelling problems are all external. (I.e.: Their theory of meaning is coherent and intelligible, at the very least from an outsider’s perspective; it just isn’t remotely plausible.)
Here I agree, except ‘under its own standards’ isn’t doing any important work. Logical positivism’s views are not inconsistent; they’re just silly and unmotivated. There is no reason for us to adopt its standards in the first place.
Speaking for myself, I think it’s very important for us to unpack what we mean by epistemic justification (as opposed to moral and other forms of justification). For instance, it’s very difficult to understand ‘rationality’ without an understanding of the normative dimension of ‘knowledge.’ But the words ‘knowledge’ and ‘justification’ themselves aren’t magical. If we need to taboo them away for purposes of rigorous philosophy, then re-introduce them only for pragmatic/rhetorical purposes in persuading laypeople, that’s fine. The traditional philosophical way of framing the question, as ‘What is knowledge?‘, is unhelpful and confusing because it conflates the semantic question ‘What do we mean by the word “knowledge”?’ with the much deeper and more important questions beneath the surface.
Similarly, I think a lot of recent work in the metaphysics of causality unhelpfully conflates conceptual analysis with metaphysical hypothesizing; both are important topics (and important work may be done on either topic under lukeprog’s rubric), but if we confuse the two we lose most of the topics’ significance in a haze of equivocation.
Reforming phil. and leaving it alone are not the only options. There is also the option of setting up a new cross-disciplinary subject parallel to Cognitive Science.
Have you taken a math class in formal logic? (The one with models, proofs, soundness and completeness, Gödel’s Theorem, etc, not the ersatz philosophy-department one that thinks syllogisms are complicated.) I’d be surprised if you had, and still considered it irrelevant to doing philosophy well.
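To give a flavor of what the model-theoretic side of such a course covers, here’s a minimal sketch (the tuple encoding of formulas is just an invented convenience, not any standard library’s API) of semantic entailment for propositional logic, checked by brute force over all truth assignments:

```python
from itertools import product

# Formulas as nested tuples: ("var", "p"), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g). A model maps variable names to booleans.
def evaluate(formula, model):
    op = formula[0]
    if op == "var":
        return model[formula[1]]
    if op == "not":
        return not evaluate(formula[1], model)
    if op == "and":
        return evaluate(formula[1], model) and evaluate(formula[2], model)
    if op == "or":
        return evaluate(formula[1], model) or evaluate(formula[2], model)
    if op == "implies":
        return (not evaluate(formula[1], model)) or evaluate(formula[2], model)
    raise ValueError("unknown operator: %r" % op)

def variables(formula):
    """Collect the variable names occurring in a formula."""
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(f) for f in formula[1:]))

def entails(premises, conclusion):
    """Semantic entailment: every model of the premises satisfies the conclusion."""
    vs = variables(conclusion)
    for prem in premises:
        vs |= variables(prem)
    vs = sorted(vs)
    for values in product([True, False], repeat=len(vs)):
        model = dict(zip(vs, values))
        if all(evaluate(p, model) for p in premises) and not evaluate(conclusion, model):
            return False  # found a countermodel
    return True

p, q = ("var", "p"), ("var", "q")
print(entails([("implies", p, q), p], q))  # True  (modus ponens)
print(entails([("implies", p, q), q], p))  # False (affirming the consequent)
```

Modus ponens comes out valid and affirming the consequent doesn’t, for exactly the reason such a course makes precise: validity is truth-preservation across all models, not a matter of rhetorical plausibility.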
Quite. It is a perfectly coherent possibility that the moral instincts given to us by evolution are broken in some way, so that studying morality from the evolutionary perspective doesn’t resolve the “what is the right thing to do” question at all. The interesting thing here is that a lot of material on LW is dedicated to an exactly parallel argument about rationality: our rationality is broken and needs to be fixed. How can EY be so open to the one possibility and so oblivious to the other?
What do you mean by “broken”, here?
About the same as when I said rationality is broken, according to EY.
Our rationality has an obvious standard to compare it to: the real world. If we consistently make the wrong predictions, it’s easy to see something is wrong. What can you compare morality to but itself?
I suspect I’m missing something here.
Pre-supposing Moral Realism gives one a clear standard by which to judge whether one’s actions are moral or immoral. A tendency to consistently make wrong predictions about whether an action is moral or immoral would mean that our moral compass is “broken.”
Of course… Pre-supposing Moral Realism is silly, so there’s that.
No, it doesn’t. If your ethics conflicted with Morality, how on earth would you tell?
That would depend on exactly what kind of Moral Realism you espouse. If you’re Kantian, you think reason will tell you whether your actions are “really” wrong or right. If you’re a Divine Command Theist, you think God can tell you whether your actions are “really” wrong or right. If you’re a Contractarian, you think the Social Contract can tell you whether your actions are “really” wrong or right...
And so on, and so forth.
As I’ve said, I think Moral Realism of this kind is silly, but if it happens to be true then what you think you “ought” to do and what you actually “ought” to do could be two different things.
Oh. Right. Yes. I’m an idiot.
Hmm.
Well, if they think they can prove it, any moral realists are welcome to post their reasoning here, and if they turn out to be right I can’t see any objection to posting on the implications. That said, I suspect that many (all?) forms of moral realism come not from mistakes of fact but confusion, and have a good chance of being dissolved by the sequences.
Isn’t EY a moral realist?
Let’s define our terms. Moral realism is a conjunction of three claims:
(i) Claims of the form “x is im/moral” assert facts/propositions.
(ii) Claims of the form “x is im/moral” are true iff the relevant fact obtains.
(iii) At least one claim of the form “x is im/moral” is true.
This should be distinguished from moral non-naturalism (which asserts that the moral facts are somehow transcendent or abstract or nonphysical), moral universalism (which asserts that a single set of moral truths holds for everyone), and moral primitivism (which asserts that moral concepts are primitive, metaphysically basic, and/or conceptually irreducible).
I don’t see how those three exclude Moral Non-Naturalism. Certainly, the majority of divisions I’ve seen have put MN-N as a form of Moral Realism...
I think Robb’s intention was to say that moral non-naturalism, universalism, and primitivism are all species of the moral-realist genus, but that one can be a moral realist without being any of those three (as EY is, I believe).
Could be. Re-reading the comment hasn’t helped me clear up my confusion, so maybe RobBB can clarify this for us.
My intent was just to highlight that realism, non-naturalism, universalism, and primitivism are different ideas. I wasn’t weighing in on their relationship, beyond their non-identity. Universalism and primitivism, for instance, I’d usually consider compatible with an error theory of morality (and thus with anti-realism): Moral statements are semantically irreducible or structurally applicable to everyone, but fail to meet their truth-conditions. Similarly, I could imagine people committed to anti-realism precisely because moral facts would have to be non-natural. We may not want to call the latter view ‘moral non-naturalism,’ though.
Well put, thank you.
Down to definitions. He no longer believes that there is some “higher good” beyond mere human ethics.
Okay, but that’s orthogonal to the question of moral realism.
(That’s what I meant by moral realism.)
And more importantly: why the ** (excuse the language) would you care.
If what I truly desire upon reflection is objectively “evil”, I want to believe that what I truly desire upon reflection is objectively “evil”. And tautologically, I will still truly desire it.
Some folks have used the idea of “moral observations” to address this. Basically, if you see your neighbor’s child light a dog on fire, and you say “I saw your child doing something wrong”, you’re making a coherent statement about your observation of reality. Our moral observations can be distorted / hallucinated just like other observations, but then that is only as much of a barrier to understanding moral reality as it is to understanding physical reality.
Oh, obviously. I was saying that it would be hard to observe morality except in the usual way; it has since been pointed out that most forms of moral realism come with such a method; praying, for example.
In the sense that pre-supposing anything is silly?
Okay.
Suppose our de facto reasoning is wrong. Then either it is not leading to wrong predictions, or it is not easy to see that something is wrong.
In any case, the world is not the only standard rationality can be compared to. We can spot the incoherence of bad rationality by theoretical investigation.
And yet a paperclipper has perfectly coherent preferences. Without direct access to some source-of-morality that somehow supersedes mere human ethics, how can we judge our morality except by its own standards? If you have such a source, it would make an excellent top-level post, or perhaps even a sequence.
But not coherent moral preferences. It doesn’t care if its paperclipping infringes on others’ preferences.
By coherence, and by its ability to actually be morality, which paperclipping isn’t.
Could you taboo “morality” for me, please? I suspect we are talking at cross-purposes.
You think paperclipping is morality?
As I said, I suspect we are using different definitions of “morality”; could we proceed without using the term?
It would have helped if you had said why you think we have different definitions. I don’t think I am asserting anything unusual (as far as the wider world is concerned) when I say morality is principally about regulating interactions between people so that one person’s actions take the interests of affected parties into account. Since, to me, that is a truism, it is hard for me to guess why anyone would demur. Other LWers have defined morality as decision theory, as something that just guides their actions, without necessarily taking others into account. I think that is clearly wrong because it suggests that a highly effective serial killer is “good”, since they are maximising their own value. But now I am struggling to guess something you could easily just tell me.
You stated that there was some way to determine the validity of our ethics—by which I meant the moral preferences humans hold, as distinct from whatever source may have given them to us, be it prisoner’s dilemmas or the will of God—without recourse to those same ethical intuitions.
When challenged on this assertion, you stated that our preferences may be revealed as incoherent by logic; yet, as I pointed out, an agent’s preferences may be perfectly coherent without being anything we would regard as “right”.
So either there has been some misunderstanding, or … show us this mysterious method of determining the Rightness of something without recourse to our ethical intuitions.
I stated:
emphasis added. Your counterexample was paperclipping, which you say is coherent. My response was:
So you still need an example of coherent morality that is somehow radically different from ours, showing that coherent morality doesn’t converge enough to be called objective (or at least EDIT: intersubjective).
May I refer you to the Chanur series, by C.J. Cherryh? Depicted in that series are several alien species along with alien modes of thought and alien moralities.
Consider for a moment the Kif. The Kif are a race of carnivores; they lack the internal wiring to appreciate emotions (as you and I understand the term “emotion”) and eat their food live (the notion of eating dead carrion disgusts them, no matter whether it’s been cooked in the meantime). Their terminal value is to maximise a quantity that they refer to as sfik, which has the following properties, among others:
If you die, then your sfik instantly goes to negative infinity. Personal survival is of massive value to the Kif. (Survival of others, incidentally, is entirely ignored).
Succeeding in any task demonstrates high sfik, with more sfik for more difficult tasks (and more for making it look easy, as compared to narrowly winning).
Being able to hold onto something that someone else wants shows greater sfik than the person trying to take it. Conversely, taking something that someone else holds shows greater sfik than the person holding it.
Since every Kif will do his utmost to protect his own life, killing another being shows a lot of sfik. This is proportional to the sfik of the person being killed (any high-status Kif is also a high-value target).
Of course, a high-sfik Kif gets to that position by being very hard to kill. He may even attract followers, on the basis that he will protect his followers from everybody else (anyone killing one of his followers takes sfik from him in the act, and so he will strive to avoid that). Note that a Kif leader does not promise to protect his followers from himself, and indeed will often kill a follower who has not proved useful (either directly, or by sending him on a suicide mission (and killing him directly if he does not go)).
Since they do not share human emotions, they do not grasp the concept of ‘friendship’. The closest translation in their language is “temporary-ally-of-convenience”.
It’s a radically different form of morality; murder is not considered a crime in Kifish society; but it’s also coherent (though more complex than paperclipping). I’m not entirely sure what you mean by “its ability to actually be morality”, though; that looks like a circular definition to me.
I think I am with Peterdjones on this: I don’t see how this can be called morality—it’s certainly a set of values, sfik-seeking is certainly something that motivates Kifs, and which they prefer to do; but to call it “morality” in the sense that humans recognize the word is no better than calling it “lust” (though it contains no sex) or “love” (though it contains no element of caring for others).
That it’s in their brains and motivates them isn’t enough to call it “morality” meaningfully. For it to be called “morality” meaningfully it has to motivate them in roughly the same manner that human morality motivates humans. Baby-eaters and Superhappies were motivated by morality, even if they were vastly different moralities. I don’t see anything in the Kifs that could be called morality.
So how does one distinguish a system of motivations from being a system of morality or not?
Well, I think one of the minimal elements required to identify something as morality, is that one tends to prefer other people to be moral, at least in general, at least when their immorality doesn’t help you directly.
Babyeaters wanted other people to eat babies, and superhappies wanted other people to... superhappy, but Kifs don’t seem to have any reason to encourage sfik-seeking in others.
Kif find sfik-seeking entities to be easier to predict, and therefore easier to control. Thus they prefer sfik-seeking behaviour in others, for purely practical reasons.
That’s an interesting distinction. So a paperclip maximizer would seem to be an entity with a moral system.
I’d certainly be more willing to assign that label to Clippy’s system than to the Kifs’. Though perhaps Clippy is too well-adjusted—if its preferences are identical to its morality, instead of merely having its morality influence its preferences, that may still be slightly too different from the way morality motivates humans to be given that label.
I’d feel better calling Clippy’s clippyness “morality” if it occasionally made staples and then felt bad about it, or at least had a personal preference for yellow paperclips in its vicinity while allowing that paperclips of other colors are just as clippy.
I am not convinced that Kif morality is coherent. Looking at it game-theoretically:
There is a set of actions (A) that run a non-infinitesimal risk of infinite loss, i.e. anything that has a risk of losing one’s life.
There is a set of actions (B) that have a non-infinitesimal chance of finite gain.
There is a set of inactions (C) that maintain an equilibrium.
Game-theoretically, a Kif should avoid A, and avoid B if it entails A to any degree. Because losses are infinite and gains finite, it makes no sense to endanger one’s life for any putative gain. (Because of the infinity in A, the relative likelihoods of loss and gain don’t matter.) Kif should either avoid each other, or adopt pacifism (both versions of C). (Kif in fact have much more motivation to be pacifistic than humans.) Kif pacifism is clearly not C.J. Cherryh’s intention.
The key to the whole argument is the “infinite”. Perhaps the “infinite” is an exaggeration, or perhaps C.J. Cherryh is one of those people who thinks infinity is a large finite number. The intended results would follow in that case. Infinity is game-changing.
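The dominance claim here can be checked with a line of expected-value arithmetic. A minimal sketch (the probabilities and the finite payoff are invented purely for illustration):

```python
def expected_sfik(outcomes):
    """Expected value of a list of (probability, sfik payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# A risky action: 99% chance of a large finite gain, 1% chance of death.
# Any non-zero probability of -inf drags the whole sum to -inf,
# so the relative likelihoods of loss and gain really don't matter.
risky = expected_sfik([(0.99, 1000.0), (0.01, float("-inf"))])

# Pure inaction: no gain, no risk of death.
safe = expected_sfik([(1.0, 0.0)])
```

Here `risky` evaluates to negative infinity while `safe` is zero, so under a literal reading of the infinite loss, inaction dominates every risky action, which is the pacifism conclusion above.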
Except they don’t have each other’s source code, so even inaction or pacifist policies incur a non-infinitesimal risk of infinite loss. There is, in fact, no set of actions which does not incur such a risk, due to free-rider effects and multiplayer PD and all this tragedy of the commons going on in there.
I don’t think the “infinite” poses quite as much of a problem as it first seems, though it does make things kind of complicated.
I don’t see what source code has to do with it. A Kif can (perhaps only) minimise the risk of infinite loss by keeping interaction to an absolute minimum, preferably zero. That is not mere “inaction” in the sense of standing quietly in an elevator full of other Kif. That is “Another Kif... RUN!!!”
I don’t see what free riding has to do with it either.
Running from other Kifs also (but less obviously) increases the risk of -inf sfik. Cooperating or living socially with other Kifs vastly increases your chances of survival in the wild, I presume. You run into a risk-equilibrium problem pretty quickly (one which can be reduced to raw maths if you have enough data about the world and precise sfik values for Kif life and thing-ownership, though it becomes very complicated maths, i.e. far more complex than I can solve, due to instrumental-value effects and relational comparisons of -inf sfik risks), but it does seem analytically solvable.
So Kifs have a strong incentive to work together in some manner in order to avoid the high risk of death when going it alone in the wilds, but also various recursive functions that compute sfik with some instrumental -inf sfik parameters plugging into the +fin sfik actions. This is apparently the case for most Kifs working together, but you have no guarantee that all Kifs can reliably precommit to cooperation (which is where source code factors in: if you could read the source code, you could see which ones can reliably precommit, under what conditions, and how to enforce this). So there is also some incentive to free-ride on the Kif cooperation wave and gain a few sfiks by killing some other Kifs and taking some of their things, so long as your ability to do so and the marginal gains from this (as well as the weighted instrumental parameter of -inf sfik risk reduction) outweigh the increase in -inf sfik risk that your higher status incurs.
As an intuitive guess, Kifs are extremely cautious socially, but on average power structures and hierarchies of exponential rarity still form, as every Kif is aware that other Kifs have an incentive to free-ride, and this means a risk of getting killed in the process of that free-riding.
Vastly is infinitely less than infinity.
They don’t exactly sound like social animals to me. The ideas that Kifs need to associate with other Kifs, but risk infinitely negative disutility from doing so, are not obviously compatible, to put it mildly. They are cat-like solitary predators writ large.
This isn’t what the math says. The math says:
Go alone: 35% chance of -inf sfik within the next year. Cooperate with other Kifs: X% chance of -inf sfik within the next year, with X being a function of the number of Kifs in the group and the chance of freeriders and the chance of being selected and so on.
So clearly for some situations and some numbers, i.e. for some functions, cooperating is superior to soloing.
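To make that concrete, here is one toy model of the trade-off (the functional form and every number are hypothetical; the series gives no actual figures):

```python
def p_death_solo():
    # Hypothetical: a lone Kif faces a flat 35% chance of death per year.
    return 0.35

def p_death_in_group(n, p_freeride=0.10, p_targeted=0.5):
    """Hypothetical yearly death risk for a Kif in a group of n.

    The group dilutes external dangers, but each of the other n-1
    members may free-ride (attack you) with probability p_freeride,
    in which case you are the victim with probability p_targeted.
    """
    external = 0.35 / n                                   # shared dangers
    betrayal = 1 - (1 - p_freeride * p_targeted) ** (n - 1)
    return 1 - (1 - external) * (1 - betrayal)

# With these numbers, small groups beat going alone, but betrayal
# risk dominates as the group grows: 0.35 (n=1), ~0.22 (n=2),
# ~0.24 (n=5), ~0.63 (n=20).
```

The crossover between the two curves is exactly the sort of risk-equilibrium calculation gestured at earlier in the thread.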
Since all Kif end up dead, they all end up on minus infinity. The question then is why they ever bother doing anything in particular in between.
Hyperbolic discounting heuristics, i.e. not valuing the state and utility of their distant future selves as much as their immediate self, perhaps in some manner which implements an asymptotic returns system for sufficiently distant selves, I would wager.
Granted, that’s just the easiest explanation that comes to mind. You’re correct that since this isn’t stated (AFAIK?) it’s a curious issue.
“Infinite” was probably an exaggeration on my part. Individual Kif certainly do all that they can to avoid death, don’t care much about their legacy, and are very surprised to find that other races have the strange concept of a ‘martyr’ - that sfik can count after death.
I have sometimes talked in terms of objective morality, because it seems truer than subjective morality. But morality is not really objective because it will vary with biology, and is of no use to sticks or stones. It is truer still to say that it is intersubjective. Humans are not going to adopt Kif morality because humans are not Kif.
My first thought on reading this is that a group of Kifish with human morality would eventually rule the world, if not actually wipe out the originals. (My second is “wait, this is just a formalization of the standard Evil Bully Race, isn’t it?” And my third is “how do they tell how much sfik others have, for the purposes of killing them / taking their stuff?”)
Why? I’m curious as to the reasoning behind this.
Yes. The Kif are the villains through most of the series. (Note; most, not all. There are some individual Kif who, while still remaining sfik-maximisers, make significant efforts to aid the heroes and avert a major war).
Mainly reputation, along with a certain degree of body language. This is not perfect, however; a Kif more or less has as much sfik as he can persuade others that he has, until such time as he fails in some task (in which case his followers will immediately defect to the side of whoever defeated him. In fact, they will often defect as soon as it looks like he is starting to fail; your average Kifish footsoldier does not believe in glorious last stands).
It’s also worth noting that a Kif who elects to become the follower of a given leader is expected not to kill or take the stuff of the other followers of the same leader, and the leader will enforce this by killing troublemakers.
Cooperation and common goals. Are they portrayed as highly successful in the series?
The Kif, like the rationalist, plays to win, no matter what that takes. They will keep promises if they feel that a reputation for keeping promises will help them to win later. They are perfectly capable of seeing, and taking advantage of, common goals; they can and will make alliances with each other, and with members of other species, as and where necessary. They have their own mental biases; but the highest-sfik Kif gain their status, in part, by being able to correct for those biases to some degree.
They would actually make reasonable rationalists, if it wasn’t for their racial tendency to defect in Prisoners’ Dilemmas and their complete lack of human morality. They’d certainly love the idea of rationality, because it leads to more sfik.
As far as success goes; they are not the biggest economic power in the series (that’s the non-violent Stsho); they are not the biggest political power in the series (that’s probably the Mahendo’sat), they don’t have the best technology (that’s the enigmatic Knnn), they may be the biggest military power in the series (or they may be second to the Knnn; the Mahendo’sat also have a comparable fleet), but not by much of a margin. (Especially since they’re having a civil war at the time; two different Kif are contending for the position of species-wide leader).
They’re a threat directly to the protagonist, and directly to the protagonists’ home planet (which is most certainly not a major military power); and their civil war threatens to start dragging in other species and getting really messy.
I didn’t say they were irrational, I said their goals (kill or frustrate high-sfik individuals) were harder to co-operate on than our goals (maximise everyone’s happiness, say). If there were only two humans left, we would work to rebuild, but they would kill each other, or at least work to thwart each other’s goals.
EDIT: Incidentally, if it’s possible to have negative sfik, does that mean you can gain sfik by helping the disadvantaged?
Succeeding in any aim gains sfik, as long as the aim is not trivial. Subjugating a group of other Kif to build a spaceship is a suitable aim; especially as you can then use that ship afterwards to accomplish more aims (and you’ve already got a subjugated crew).
But yes, two Kif, alone, are not a stable equilibrium. (Among other things, they only eat living food, and not plants. Two Kif with no other live animals around are the only source of food for each other—they have no qualms about cannibalism). Depending on the individuals, however, they may well decide to cooperate temporarily, and rebuild, in order to increase their long-term odds of survival. (They don’t care about their legacy, however. Being killed is The End. It’s a bit of a mystery why they would ever try to have children). Long-term survival beats a temporary sfik gain.
Not directly—that’s a trivial aim. But it’s perfectly within the Kif character to offer the disadvantaged a deal, something along the lines of “Join my followers, and I will help you now in order to obtain your labour later; betray me later, and I will kill you” and then use the extra footsoldier to gain sfik in some way. (If the disadvantaged was disadvantaged by a common enemy, they might simply turn up, help the poor fellow back on his feet, let him heal a bit, then give him a gun and point him at the guy who originally hurt him).
I’m not sure that it’s possible to have negative sfik without dying. It’s certainly possible to have zero.
Good point. I meant something more along the lines of a small post-apocalyptic group, but you take my point; rebuilding is useless if it requires, say, your death, or would prevent you from frustrating other Kif. They can co-operate, but they don’t want to; it’s unnatural and inherently bad—like a human killing other humans.
So if I take something a corpse wanted to keep (grave goods?), do I achieve positive-infinity sfik? Or do they have to be actively resisting? Could they set up traps?
The usual Kifish model of society is one powerful leader forcing everyone else to do as he says (in a post-apocalyptic situation, if only one Kif has got hold of a gun, then he will be the leader until such time as someone takes it from him). It’s more subjugation than cooperation (and there’s always a certain amount of competition to be the leader) but it does result in everyone moving in the same direction, at least when the leader can see.
What could a corpse possibly want to keep? Besides which, yes, they have to be actively resisting. They could set up traps, but it’s hard to see any reason why a Kif would bother; he doesn’t care what happens after he’s dead. I guess a Kif could go after the grave goods of another species (non-Kif individuals can gain sfik—specieism is one of the evils that the Kif do not usually practice) but they’d only gain the (finite) sfik for defeating the traps, not the infinite sfik for defeating the dead person.
Kif will often threaten to eat the hearts of their enemies. This could be considered as taking something (the heart) from a corpse. However, one gains only a finite amount of sfik for doing so (some for killing the enemy in a clear and public manner, more for doing so at close range instead of employing a sniper (note that all Kif dress alike, in dark robes, presumably in order to confuse snipers; in the last book, one Kif appears in a silver-trimmed robe, which instantly marks him as having extremely high sfik—high enough to be identifiable from a distance and get away with it)).
Sure, but humans can maintain co-operation on a much larger scale by simply having compatible goals.
That was exactly what I was thinking of; humans are known for our reluctance to part with worldly possessions, even in death.
It sounds like the sfik of the defeated individual is being used merely as an approximation of the difficulty of overcoming them.
I had assumed they would try and hide, generally, from becoming well-known. Why was this one in identifiable clothing? For the challenge? Seems risky.
Probably true. A Kif might claim that they make up the difference by not having irrational personality clashes, and refusing to follow a charismatic leader who is a total idiot (both known failure modes of humanity). Though I don’t think that that would actually make up the entire difference. (It’s probably also worth mentioning that while Kif have jump-drive-capable spaceships, several varieties of weapons, good knowledge of torture and truth serums capable of working on even non-Kif biologies, they do not have doctors. A Kif is expected to heal his own wounds, or die, more or less. This probably means that the truth serums they have were stolen from other species).
Exactly correct.
Because he (Vikkhtimakt) actually did have the sfik to get away with it. Having massively positive sfik, means that he’s massively difficult to kill; wearing the identifiable robe means that he has massively positive sfik (with a side chance that he’s an idiot, but in that case why is he still alive?). Any Kif who’s not a total idiot would hesitate at this point, and find out why he’s so identifiable before pulling the trigger; and it turns out to be well-known among the Kif (and thus easy for a potential assassin to find out) that this particular one is the top lieutenant of someone with even higher sfik (spoilers if I define that more exactly—the question of exactly who Vikkhtimakt is working for is a major plot point in that book), who will be Most Displeased if Vikkhtimakt gets killed. (The prudent Kif will therefore only kill Vikkhtimakt as part of a plot to supplant—and kill—his leader; whose extreme sfik means that a prudent Kif will have second, third, fourth, and fifth thoughts before attempting that challenge, and will certainly give due consideration to killing the leader first, without warning, instead).
And not, say, become the bonded slave of his healer? I guess such submission would be damaging to one’s sfik, but not so much as dying.
Ahh. So the trick is to kill him and not tell anyone.
Once he’s healed, if the healer is not strong enough to keep him subjugated, he’ll simply go away (possibly killing the healer in the process). If the healer is strong enough to keep him subjugated—well, then said healer can probably more easily subjugate healthy Kif to start with.
All Kifish negotiations consider the possibility of either party reneging at any stage in the future (and Kif will renege immediately if they gain some advantage in the process).
What’s the point of that? If you don’t tell anyone, then you don’t get the sfik. A Kif has only as much sfik as he can persuade others that he has.
Explosive collars, or equivalent if they lack heads.
In practice, sure, but I got the impression that they valued actually gaining sfik, not merely pretending to have it (like a human valuing actually saving orphans, even though pretending to accrues benefits as well.) Was I mistaken?
Okay, that’s a good idea. Not mentioned in the book, but I can see a Kif going for it.
They value sfik in much the same way as a human values reputation (which bears a lot of similarity to sfik). If no-one knows about it, it doesn’t count either way (a Kif may try to suppress knowledge of some failure, if this seems possible).
It is also common among Kif to claim to have more sfik than one has; in the process, one may in fact gain the extra sfik (say, by attracting followers with the bluff, and then using them to succeed in some task). Of course, if one bluffs too hard, then one gets a task that one cannot complete; and if one fails, or admits an inability to complete the task, one likely gets killed, so there’s an incentive not to bluff too high. (If one succeeds in the task, then one is assumed to have had sufficient sfik all along; the bluff becomes fact through general agreement).
It’s also worth noting that a Kif does not need to know who pulled the trigger to take revenge. In the absence of that knowledge, Kifish “revenge” might simply mean killing everyone present on the space station or other area in question (so everyone else nearby suddenly has a very strong motive to find out who did it before the Leader gets back, and present said Leader with the assassin’s disembodied head in the hope of turning away any further indiscriminate wrath, and possibly even gaining the Leader’s favour in the process). Whether the Leader would actually do this or not is irrelevant; the assassin’s head will anyhow be removed (if he can be identified), just in case.
Irritatingly, if a Kif does do this and get away with it, then six months later some ambitious underling several solar systems away might claim to be Vikkhtimakt’s killer, and get away with the sfik in any case.
Fair enough, I just assumed it was more of an honour code. I guess there’s no such thing as a low-key Kif that rules from the shadows, then. Chalk one more victory up for the human resistance I postulated earlier.
EDIT: It probably says something about me that I just sort of assumed explosive collars were in common use. It might be seen as somehow “cheating”, though; it’s a lot easier to subdue someone with an implant than with a stick.
If there was, he’d still claim responsibility for his actions—possibly in the form of a note left at the scene claiming that “The Crimson Shadow did this!”. He doesn’t ever need to associate the name of the Crimson Shadow with his face; he can just as easily deliver orders to his underlings remotely, with voice-distorting telephones. (Of course, this means that he clearly doesn’t have the sfik to show his face; he fears someone stronger, and a fair majority of his followers will defect to the “someone stronger” instead. So, you’re right, it’s not really practical in the end).
Also, an external collar may be removable; it’s also worth noting that the only way to send a signal faster than light in this universe is in a jumpdrive-capable ship (and that has a few other disadvantages, mainly in that it takes some time to dock safely), so if a follower can get to another solar system and knows that his boss won’t be visiting for the next twenty-four hours, then he has twenty-four hours to try to get the thing off.
An internal collar largely evades this problem, as long as the underling isn’t willing to take a bite out of his own neck to escape. (But how much sfik would that sort of dangerous escape be worth?)
That’s true: you can always get far enough away that the boss can’t hurt you, no matter the tech level; radio and such merely extends the range (yet another advantage of humanity). In fact, since the implant would presumably need to be surgically inserted in the first place, it’s never perfect. (Although most fictional explosive collars react adversely to tampering; and once portable brainscans become available, treachery is impossible; luckily that destroys narrative tension in any case.)
EDIT: wait, how does a Kif with a secret identity work?
I realised after typing that up that I’d managed to miss what would probably be the obvious Kifish solution to the problem—kill the person holding the detonator (a quick draw and an explosive bullet to the brain would do just fine). Tamper-proofing on the collar doesn’t matter in that case, as it is not removed; and the collar can’t stop a long-range sniper.
Portable brainscans are not available in the series (and I don’t see the Kif using them in any case. Sure, I can make more copies of me, but each copy would want to kill the original and take over, so it’s kind of risky...)
I’m not entirely sure. They all look fairly similar to each other and usually dress to hide the differences (the protagonist simply cannot tell them apart at all, a significant disadvantage—worse yet, their smell makes her sneeze), so anonymity is simple enough; it would be fairly straightforward for a Kif to claim to be a different Kif, perhaps with the help of some makeup to fool his fellow Kif. (It would be rather harder to claim to be human.)
Well, yeah. It’s a weapon; a highly effective weapon. That depends on you having the target at your mercy at some point.
No, like checking to see if someone’s plotting against you.
That is to say, how does it work for a race whose chief value is reputation if they have two (or more) separate identities?
At all points. Just because he’s wearing an explosive collar now doesn’t mean he won’t shoot you—it just means that he’ll do so very suddenly.
Oh, right. Yes, I can see that getting a lot of use; just be careful when using the machine (anyone who was thinking about plotting will likely start shooting at about that point).
I do not know. The question did not come up in the series.
This is why you invent the deadman switch.
Indeed. It’s an arms race.
He has to be at your mercy to get the collar on. After that, you have a powerful weapon against him. Not a perfect weapon, but it should be at least as good as your fists, eh?
Not if it’s attached to their head and contains a bomb, set to go off if they betray you!
But that kind of thing sort of destroys narrative tension, so it’s not going to happen.
If I had to guess, I would say that the highest-value ID is the “real” one, and the other is merely a cover to throw off suspicion. Otherwise the whole “dressing all alike” thing could cause problems.
Very true. Even a little better than a gun, because it’s harder to miss and you don’t have to bother to aim. (Just don’t use the wrong detonator, that would be embarrassing).
That seems reasonable. One would expect all successes to be claimed by the “real” identity, and all failures to be shunted to the “false” identity; though this may result (if handled poorly) in people asking why Real Identity hasn’t yet had Fake Identity killed as an example to the others?
How do you identify morality without simply comparing it to your own intuitions?
I’m using a kind of functional-role analysis: the role of morality is to regulate the behaviour of each individual to account for the preferences of others. That isn’t an intuition in the sense of “men kissing - yeuch!”
The intuition of “men kissing—yeuch!” is superseded by other intuitions. There’s a whole sequence on metaethics, you know. And you haven’t answered my question.
Says who? In my theory? In EY’s theory? Why should I care?
I know. I don’t find it very persuasive or cogent.
Yes I have. I identify morality by performing a functional-role analysis and seeing whether candidates fit the functional role.
You have first-order moral intuitions, yes, and you have intuitions about how to resolve contradictions between these intuitions. Yes? That’s how everyone acquires knowledge of morality. Do you have some other method of acquiring such knowledge?
I only have “intuitions about how to resolve contradictions” inasmuch as rationality in general has an intuitive basis. If there is a problem of comparing intuitions against intuitions in (meta)ethics, there is a similar problem in rationality.
Could you describe a moral belief you changed in the past? For the purposes of an example.
It’s an intuition in the sense of “killing=bad”.
What I said was not an intuition in the sense of killing=bad, because
a) it’s not an intuition. It’s a functional-role analysis.
b) It’s metaethical (what kind of thing morality is), not first-order ethical (what is wrong).
Indeed. I had already noticed I was talking nonsense and retracted the comment by the time I received this. Sorry. I have now given an actual, non-stupid reply here.
Why not?
So ‘morality’=‘caring about other people’s preferences’?
Caring about other people’s preferences is a necessary but insufficient part of morality.
So Hedonistic Egoism, for example, doesn’t count as morality?
It may be advertised as such, but I don’t have to buy that.
He has attempted to address this issue in the Meta-Ethics sequence, although I find his points on this specific matter very confusing, and I was very disappointed with it compared to the other sequences.
This is a very good point. If we agree cognitive biases make our understanding of the world flawed, why should we assume that our moral intuitions aren’t equally flawed? That assumption makes sense only if you actually equate morality with our moral intuitions. This isn’t what I mean by the word “moral” at all—and as a matter of historical fact many behaviors I consider completely reprehensible were at one time or another widely considered to be perfectly acceptable.
Your examples of bad philosophy … your reasons why they are bad … aargh! Apparently it’s bad to (1) reason about psychology (2) use the ideas of ancient philosophers (3) argue about definitions (4) mention religion at all. (I’m just guessing that this is the problem with the last item in the list.)
So far as I can see, the only problem you should have with papers 1 and 3 is that they’re not sexy enough to hold your interest. They’re not bursting at the seams with citations of experimental psychology or computational epistemology. Really, you shouldn’t dismiss paper 2 as you do either, but I concede that seeing value in the psychological reflections of antiquity would require unusual broadmindedness. (Paper 4 is just oddball and I won’t try to defend it as a representative of an important and unjustly maligned class of philosophical research.)
Concerning your curriculum for philosophy students, well, such zeal as yours is the basis for the renewal of a subject, but in the end I still think something like Plato and Kant would be a better foundation than Pearl and Kahneman. Causal diagrams and behavioral economics do not touch the why of causation or the how of conscious knowledge. If they were not complemented by something that promoted an awareness of the issues that these formalisms inherently do not answer, then philosophically they would define just another dogma parading itself as truth.
Please help me compare: what useful things does Plato say about the why of causation, and why should I believe him? How can I use Plato’s knowledge about causality to achieve things in the real world (except for impressing people by quoting him)?
Aristotle is a more straightforward example. If you made an effort to understand Aristotle’s four types of causes and ten categories of being—if you critically tried out that worldview for a while, tried to understand your own knowledge and experience in those terms, identified where it works and where it doesn’t, the logic of the part that works and the problem with the part that doesn’t—it would undoubtedly be instructive. Aristotle is such a systematic thinker, you might even fall in love with his system and become a neo-Aristotelian, bringing it up to date and evangelizing its relevance for today’s world.
This seems to be more indicative that if one thinks hard enough about any world view it will seem to be useful and make sense. This is essentially as much of an argument to take Aristotle seriously as C. S. Lewis’s claim that “I believe in Christianity as I believe that the sun has risen: not only because I see it, but because by it I see everything else.” is an argument to take Christianity seriously.
This doesn’t answer the question or even the type of question as phrased by Viliam. The claim isn’t that you can use a systematic approach to make your own thoughts ordered in some fashion, but how to make the claims pay rent.
Beating moistened clay against cold iron has a similar effect. On what basis do you claim Aristotle’s memeload is preferable, beyond its ability to make impressions?
Aristotle’s categories and causes are all very familiar concepts, so familiar that people don’t reflect on them. These “memes” are already there, they’re just not organised and criticised. It’s like physics. You can go through life without ever sorting out your ideas about force, energy, momentum… or you can take a few steps on the road which leads, if you continue along it, to arcana like the mass of the Higgs boson. Similarly, you can go through life without wondering what it means to “have a property” or to “be a cause”, or, you can take up metaphysics. Aristotle is to metaphysics what Newton is to physics, one of the early landmark thinkers whose subsequent imprint is ubiquitous.
I myself am willing to go out on a limb and say paper 4 is possibly worth thinking about and not blatant trolling. I presume lukeprog wouldn’t have a problem with a paper proposing an fMRI comparison study of atheist/theist Bach listeners. But one would first have to justify such an expense, no? Or at least formulate an hypothesis:
I hope lukeprog would not give a paper credence just because it did sciencey stuff and maths. There is, after all, the famous dead fish study which, as it happens, used fMRI. We have already learned that there is a lot of junk science in medicine and in nutrition. So also in neuroscience.
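The dead-salmon study’s point was about uncorrected multiple comparisons: run enough independent tests at p < 0.05 and some voxels will “light up” by pure chance. A quick simulation sketch (the voxel count and thresholds are made-up illustrative numbers, not the study’s actual parameters):

```python
import random

random.seed(0)
n_voxels = 10_000  # hypothetical number of independent voxel-level tests
alpha = 0.05

# Under the null hypothesis (no real signal anywhere), a valid test
# produces uniformly distributed p-values, so each voxel's p-value
# can be simulated as a uniform random draw.
false_positives = sum(1 for _ in range(n_voxels) if random.random() < alpha)
print(f"Voxels 'active' in pure noise: {false_positives} "
      f"(expected ~{int(n_voxels * alpha)})")

# Bonferroni correction: require p < alpha / n_voxels instead.
corrected = sum(1 for _ in range(n_voxels)
                if random.random() < alpha / n_voxels)
print(f"After Bonferroni correction: {corrected}")
```

Without correction, roughly 500 of 10,000 null tests pass; with correction almost none do, which is why the salmon’s “activation” vanished once proper corrections were applied.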
Luke, how does the Dolan & Sharot book measure up by the standards of science as it should be done?
I was not suggesting anything of the sort. Azari’s work on religious experience is not junk science, as far as I’m aware.
Seconded.
They’re a necessary foundation, because you can’t understand Kripke without understanding Kant (etc). That has nothing to do with reverence.
Presumably, then, you would study Kant in the early stages of whatever course you are devoting to Kripke’s work. Other than his work in Political Philosophy (I’m well aware he’s a prerequisite for that,) what other foundational purpose does studying Kant serve?
I think you’d have an easier time justifying the thesis ‘Kant was wrong about everything’ than ‘Kant was not super-super-crazy-influential.’ Consider:
Kant ⇒ Schopenhauer ⇒ Nietzsche ⇒ all the postmodernists and relativists
Kant ⇒ Schopenhauer ⇒ Wittgenstein ⇒ most of the positivists
Kant ⇒ Schopenhauer ⇒ Nietzsche ⇒ Freud
Kant ⇒ Fichte ⇒ Hegel ⇒ Marx
Kant ⇒ von Mises ⇒ the less fun libertarians
My conclusion, by Six-Degrees-of-Hitler/Stalin/RonPaul ratiocination, is that Kant is directly and personally responsible for every atrocity of the 20th century.
You seem to be in company with Ayn Rand there
Even a broken Ayn Rand is right twice a day.
Twice a day may be a bit too often. Let’s settle on some lower rate, shall we?
I was just mulling over that Peter may have been right in this conversation, and then this beauty of a comment drops. You should put this on a poster or a t-shirt, or something! :)
This was quite possibly the best interwebs post I’ve seen in a long time … if you don’t start making these t-shirts, I will!
I didn’t say Kant was only relevant to Kripke. He was hugely influential.
Re-reading my post, it wasn’t clear that I was asking you for other examples, so I apologize for that. Would you mind giving other examples of relevant ideas for which a prior knowledge of Kant is absolutely necessary?
Eg. the whole of German Idealism. Believe it or not, philosophy educators have a reasonably good idea of what they are doing.
Having dropped a double major in philosophy, I’m inclined to take the side of “not.”
Having read a lot of bad attempted philosophy by scientists, I’m inclined to think phil. doesn’t need replacement by, or oversight from, science
But most of the really brilliant philosophers have come from a scientific background! For example, I don’t think 20th-century philosophy would have accomplished nearly as much without Wittgenstein. And Aristotle wouldn’t have gotten anywhere if he hadn’t spent all those years cataloging plants and animals.
German Idealism is a fairly self-contained subject. You could go through a degree or two without ever touching upon it unless you had to study Hegel for unrelated reasons. So I don’t see any reason Kant wouldn’t be taught during that course, or in a course of his own which is a prerequisite for the GI course, rather than in Phil 101.
Some do, some don’t, generalizing is fun.
Below this line is the part I cut from the original article.
Below are some quotes from the abstracts of recent papers appearing in the top 5 philosophy journals, along with my reactions.
Abstract #1:
What are you doing? We have experimental psychology now.
Abstract #2:
This article examines Aristotle’s model of deliberation as inquiry (zêtêsis), arguing that Aristotle does not treat the presumption of open alternatives as a precondition for rational deliberation… (Deliberation as Inquiry)
Please move to the history department. Philosophy is supposed to be an inquiry into how reality works, not a collection of musings about the possible meaning of ancient, ignorant writings.
Abstract #3:
According to ‘orthodox’ epistemology, it has recently been said, whether or not a true belief amounts to knowledge depends exclusively on truth-related factors: for example, on whether the true belief was formed in a reliable way, or was supported by good evidence, and so on… In the first part of this paper I try to clarify the intellectualist thesis and to distinguish what I take to be its two main strains… (On Intellectualism in Epistemology)
Another paper arguing about the definition of “knowledge”? No thanks.
Abstract #4:
...many who do not believe in God nevertheless regard certain pieces of religious music, such as Bach’s B minor Mass, as among the greatest works of art. The worry is that there must be something compromised or incomplete in the atheist’s experience of such works. Taken together, these thoughts would seem to point to the sceptical conclusion that the high regard in which many atheists hold works such as the B minor Mass must itself be compromised… (Religious Music for Godless Ears)
Okay, now you’re just trolling.
This post has a number of useful insights, but I’m not so sure about this:
As someone who is currently studying philosophy at the undergraduate level—and thus has first-hand knowledge of what it is like to start with Plato and Kant—I don’t quite see where you’re getting the claim that starting with ancient philosophers either (1) in fact teaches students to revere them/their methods, or (2) is at least meant to teach students to revere them/their methods. My own experience, what I’ve heard from fellow students, and the academic papers that we are actually assigned to read all run counter to your claim.
First, one of the primary, if not the main, purposes in starting with ancient philosophers is precisely to discuss how and where they went wrong. The professor does not just tell us whether a certain philosopher is right/wrong, but has the students critically evaluate that philosopher’s claims both in papers and in discussions. Second, there are numerous academic articles written on their claims (just by virtue of the fact that they are ancient philosophers), which in turn means that those articles—and their arguments—combined with the students’ own analyses provide a substantial foundation for ‘critical thinking.’ Third, regardless of the ancient philosophers’ specific claims, the manner in which they argue for their conclusions and critically think themselves is tremendously helpful—both as a model to emulate and to not emulate—for students just starting to learn what constitutes a good argument. A charitable reading, which in particular recognizes the historical context, will show that many of the ancient philosophers do make good arguments, and value precision and rigor in so arguing; of course, many specific empirical claims are wrong, but insofar as those depend on context and not on poor argumentation they are irrelevant.
I do think that there is much wrong with philosophy, but that specific claim you made is a little shaky (and underspecified).
Isn’t it history of philosophy, rather than philosophy? Learning why Aristotle’s ideas on physics are wrong (e.g. “all bodies move toward their natural place”) belongs mostly in a History of Science course, not in a Physics course. Shouldn’t it work the same for philosophy?
If I recall correctly, introductory college physics (as I took it almost 20 years ago!) didn’t teach how to discover physical truths, so much as which ones have been discovered. One might do a few experiments to verify that thrown objects approximate a parabolic path, but one will spend much more time and effort doing word problems applying known formulae from Newton, Boyle, Kirchhoff, etc.
Hmm, this is a good question. After spending some time thinking about this, I think the problem I have in trying to separate “history of philosophy” from “philosophy” is that such an enterprise almost appears antithetical to the goal(s) of philosophy. Philosophy seems meant not to be useful or practical, but intended to ask the right sorts of questions, think about things one abstraction deeper/more meta, and question things others don’t question. As such, studying the history of philosophy is philosophy—and vice versa—insofar as the goal of philosophy is not to positively answer* the right questions but to think philosophically and ask those questions in the first place. So, learning why Aristotle’s ideas on physics are wrong is simply not the sort of thing with which philosophy would concern itself—for better or for worse.
*Thinking about it some more, I just realized that I may be conceiving of the goal(s) of philosophy as something different than what most of the posters here do. I get the sense that lukeprog (and others here) wants philosophy to provide answers to the deep questions, or at least attempt to do so. The problem is philosophy is not about that; maybe it should be, but then I’d argue that such a field is precisely what science is, with philosophy as almost a check/balance (making sure that the right questions are still being asked, assumptions questioned, etc.).
How is asking “the right sorts of questions” not “useful or practical”? To “question things others don’t question” is what scientists do. Examples: Why do things fall down when let go? (physics) Why do children tend to look like their parents? (genetics) Why does a candle burn? (chemistry)
What are the questions “others take for granted” that philosophy asks? Wikipedia:
Most of these are logic, psychology, neuroscience, cognitive science and linguistics, and most recently AI research (esp. knowledge acquisition and reasoning). What’s left is “reality” and “existence”. Have I missed anything?
Upvoted. I do largely agree with you, and the things that I don’t quite agree with you about are things about which I don’t think I can form a persuasive argument.
I’m pretty sure many philosophers would disagree.
I fully concede that; that was more what I think it should be about. And if that’s true and philosophers really do want to answer those deep questions, philosophy needs to be reformed to incorporate more modern science—something like what lukeprog proposed.
So I have a couple of problems with this post.
Firstly, I think that luke simply has a very different idea of what philosophy ought to be doing compared to most philosophers. For example, most philosophers think that doing a fair amount of what is (more or less explicitly) History of Philosophy is a) of independent interest b) useful for training new philosophers and c) potentially fruitful.
I’m not terribly convinced by a), I have some sympathy with b) (many classic philosophers are surprisingly convincing and it’s worth taking the time to figure out why they’re wrong), and I strongly disagree with c) (if they had good insights, there should be better presentations of them by now!). I think the disagreement about a) is the most important, however, as it indicates a simple difference in what people are trying to do with philosophy.
On that ground it just seems childish of luke to criticise Article #2 on the grounds that it’s really history: of course it is, that’s part of what philosophy departments do. So luke wants to change the way philosophy tends to be done, fine, but it’s churlish to assume that that’s the way things already are and that the current practitioners are just bad at it.
Secondly, I think I disagree with luke about what a lot of philosophy is trying to do. Luke finds a lot of so-called “linguistic” philosophy frustrating because he doesn’t feel it solves problems that are “out there”. I’d say that it’s not trying to. The clearest way I can think of to put it is like this: philosophy is often trying to solve the problems that ordinary people come up against when they use words. In that situation it’s highly relevant to find out, say, what they mean by the word “knowledge”, as otherwise your answers will have no relevance to the epistemic concepts they actually use.
Philosophers aren’t trying to build an AI, so they’re not usually so interested in the ideal epistemology. They’re interested in what humans are doing. And that involves a lot of probing the language that humans use. In particular, the much-maligned thought-experiments and “intuitions” are actually perfectly respectable data about what the author, as a competent language-user thinks about the words in question (which is what the author in article #3 is presumably trying to do in a specialised way). I think it’s a confusion to think that thought-experiments are meant to tell us about the deep structure of the world! (admittedly, this is a mistake that is made by some philosophers!)
Basically, luke wants to do something completely different to most philosophers, and so is confused that they don’t seem to be doing what he wants them to do.
Couple of other things:
For the record, I think that plenty of philosophers write lots of bullshit, but then so does everyone else. Philosophy is hard, people go astray.
Article #4… it’s discussing some of the potential implications of atheism with regards to people’s responses to various artworks. What’s so problematic?
What would an ideal epistemology be? I’m not asking for the ideal epistemology itself, but just how could you tell whether you’d developed one? Or if you were at least getting closer to it?
It kind of depends what you mean by “epistemology”. I was cheating a bit when I said that: many philosophers seem to think that epistemology is simply about studying the concept of knowledge as used by human beings. However, you might also think that perhaps what we’re really interested in is how to get useful information about the world.
In that case the human concept of “knowledge” seems pretty shitty: it’s binary, and has a whole host of subtle complications of usage. Whereas something like a Bayesian approach seems much better.
So I’m claiming that philosophers aren’t necessarily interested in the latter kind of epistemology; they’re interested in “knowledge” as most humans use it, rather than whatever epistemic concepts you would build into a new agent!
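The contrast can be made concrete. A binary “knows / doesn’t know” predicate discards information that a graded, Bayesian representation keeps. A minimal sketch, with made-up prior and likelihoods:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5      # agnostic prior
threshold = 0.95  # a binary 'knowledge' cutoff discards everything below it

# Each observation is four times as likely if the hypothesis is true.
for n in range(1, 6):
    belief = bayes_update(belief, 0.8, 0.2)
    print(f"after {n} observations: credence = {belief:.4f}, "
          f"'known'? {belief > threshold}")
```

Under the binary concept nothing registers until the threshold is crossed; the Bayesian credence tracks every piece of evidence. (In odds form each update just multiplies the odds by 4, so five observations take even odds to 1024:1.)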
‘Ideal’ is underdetermined here, but we could give it content. I can imagine four basic families of ways to evaluate an epistemology (in addition to combinations):
Territorial: How useful is the epistemology for causing agents to consistently assert truths and deny falsehoods?
Epistemically Rational: How useful is the epistemology for causing agents to believe things in proportion to the strength of the available evidence? This may be a special case of the territorial evaluation, defined so as to exclude gerrymandered epistemologies that only help their agents by coincidence.
Instrumentally Rational: How useful is the epistemology for causing agents employing it to attain their personal goals?
Moral: How useful is the epistemology for satisfying everyone’s preferences, including the preferences of people who may not subscribe to the epistemology themselves?
This is a good question for Eliezer Yudkowsky, since he seems to think Objective Bayesianism is it.
That’s a terribly inadequate reason to be uninterested in the ideal epistemology. Luckily many philosophers do seem to be quite interested in it; still, like Luke, I wish there were more.
I think (b) can be quite useful, for the reason you described. IMO it’s useful in physics, as well, because it lets the student reproduce (or at least read about) the experiments that led to our current understanding of the world. For example, are subatomic particles evenly distributed throughout a piece of metal (and, indeed, all matter)? It’s easy enough to answer “no”, but it’s much more important to discover how the answer was found. Even though this answer itself was pretty far from the truth.
I am curious about the qualifier “pre-1980.” Do you think later work in these disciplines is noticeably better?
“pre-1980” = “pre-lukeprog”, and thus, the ancient days
(kidding)
If I correctly identified him from his karma score in the survey results (and everything else I saw was consistent with what I already knew about him), he’s younger than that.
How much of the difference is rounding?
He appears to have been born in 1985, so if I was him I would have been torn between rounding down to 1980 and rounding up to 1990 and ended up not rounding at all. (1985 does sound Schelling-y enough to me.) But round-to-even is a thing.
Plenty of fields (like cognitive science, linguistics, mathematical causality) don’t seem to have had many or most of their seminal works published until after the 1980s; the 1980s also marked a huge increase in the availability of computers and networking, which is a huge boon to scientific research.
These are just guesses off the top of my head and glances at Wikipedia, but having been born in the 80s myself, I’m probably biased.
So much for Grimm, Saussure and Chomsky.
I guess that should be ”...had many seminal...”
“The 1980s” is a somewhat arbitrary line, but looking at the history of linguistics on wikipedia, lots of big changes happened in the 1960s and 1970s, and many important subfields of linguistics have steadily gained ground “From roughly 1980 onwards.”
If someone had interests relevant to mathematics, and they only studied math from before 1900, they would be missing a great number of seminal works, and have very little knowledge of modern mathematics, even if there were tons of amazing and influential mathematicians before that point.
But your original claim was not “study the new stuff, it’s better”, your claim was that there were no advances before the new stuff.
That was not what I intended my original claim to be, and I think the spirit of lukeprog’s post was centered on the claim that one should “study the new stuff, it’s better.”
If I didn’t communicate that that was my intention clearly, I’m sorry, I hope we’re on the same page now.
There is some interesting discussion at Hacker News about this article.
I was going to say something about the ease with which I can come up with obviously confused or unimportant “science” abstracts, but then I realized I was missing the point. Philosophy can be improved (and probably more easily than science) and your proposed introductory sequence is actually pretty good.
Thanks! My post on how to improve science (in a few ways, at least) is How to Fix Science.
Forgive me if someone else has made this rather obvious remark (too many comments to wade through), but isn’t it a weird irony to rely on the big generalization that “[h]itherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong?”
You give the impression of someone who has not begun to understand basic, perennial philosophical problems. To illustrate, consider the following questions that are dealt with explicitly (and incredibly well) by philosophers throughout the tradition, but not derivable at all from any scientific discovery:
What is the nature of the ontological difference between being qua being and particular beings? Perhaps you think this is a pseudo-problem, yet we all think we can meaningfully say that different things are in (maybe) the same way and the same respect. In what sense is it legitimate to do this?
What is the best metaphor we can use to describe what is going on when we say something is “true”?
What is the Good?
The presuppositions that underlie this blog post are questionable for many reasons, but they are especially so because you go out of your way to ridicule the only mode of inquiry that is capable of calling them into question: namely, philosophical inquiry.
Yes, you caught the irony. Of course, not all ironic statements are false. There are in fact true generalizations about overgeneralization.
There are indeed questions to which philosophers have given insightful answers. But I feel embarrassed on behalf of philosophers to see such pseudo-questions paraded as their proudest accomplishments. Being is univocal because quantification is univocal; we don’t mean different things by ‘are’ or ‘two’ or ‘all’ in different contexts. The best metaphor for truth will depend on our goals. ‘The Good’ and ‘Being’ are ambiguous terms, so the question as to their intended sense will need to be clarified before it can be fruitfully pursued. See Peter van Inwagen’s Being, Existence, and Ontological Commitment.
(Since I’m citing a philosopher, you know I agree with you to some extent. I just don’t like treating Philosophy as a tribe to be defended. Especially not Bad Philosophy. If philosophy is anything worth preserving, it’s just a toolbox.)
Luke, this is my first comment on LessWrong so forgive me if I’m missing some of the zeitgeist. But I was wondering if you could elaborate on a couple points:
You recommend replacing ethics with moral psychology and decision theory. Hearing that, I’m concerned that replacing ethics with moral psychology would be falling for a naive is/ought fallacy: just because most people’s psychological makeup makes them consider morality in a certain way does not make those moral intuitions correct. And replacing ethics with decision theory would be sidestepping the metaethical question about the legitimacy of consequentialism.
You’ve also left out any political theory from your syllabus. That is disappointing, since one of the roles that philosophy plays when performing at its best is uniting epistemology, ethics, and political philosophy. Plato and Kant, for example, were attempting to do that. How do you see your curriculum weighing in on questions like “What is justice?”
As for my background, I studied cognitive science as an undergraduate with a focus on complexity theory and artificial intelligence, but have also spent a lot of time reading and discussing other philosophy. While I think I understand the thrust of your argument (it’s one I would have made myself when I was an undergrad), I’ve been since convinced of the value of other schools of thought.
I’d argue that, say, continental philosophers are not as sloppy as computer scientists or analytically trained philosophers accuse them of being. Rather they have a specialized vocabulary (just like other specialists) for some very difficult but powerful concepts. Often these concepts pertain to social and political life. These concepts aren’t easily reducible to a naturalized cognitivist worldview because they deal with transpersonal phenomena. That doesn’t mean they lack utility though.
I don’t think Luke would disagree with this statement. The point of learning moral psychology, as I understand it, is not to adopt moral psychology as moral philosophy; it’s to understand where moral intuitions come from. Luke doesn’t want philosophers studying intuitionist moral philosophy, as I understand it, because it doesn’t provide an accurate account of how people actually make moral decisions in practice.
My understanding is that there is a standing agreement on LW not to discuss politics; see the Politics is the Mind-Killer sequence.
Can you elaborate on what you mean by this? (I am not sure exactly what you mean by “a naturalized cognitivist worldview” or by “transpersonal phenomena.”)
Thank you for this reply.
Thank you for the clarification.
Where I still take issue is that even if we know, generally speaking, “how people actually make moral decisions in practice”, or “where moral intuitions come from,” that does not add up to what a philosophical study of ethics is supposed to give us, which is more like: what moral decisions people ought to make, or, how people’s moral intuitions ought to be refined (through argumentation, say).
To put it another way, if the study of ethics changes the way one makes moral decisions, then the ethically educated would act outside normal ethical practice. (If study does not change the way one makes moral decisions, it’s not clear why it would matter at all how people are taught moral philosophy.)
That is very interesting to know. But I don’t understand your implication. I thought we were talking about a potential revision of the philosophical curriculum. Are you suggesting that mentioning that political theory is part of philosophy is against the ‘agreement on LW’ and so should not be discussed? Or that Luke has chosen not to bring up this aspect of philosophy so as to avoid bringing up politics?
By ‘naturalized cognitivist worldview’ I mean the worldview that holds all the pertinent phenomena to be ‘natural’ in the sense of being discernable by physical sciences, with an emphasis on those phenomena that are part of cognitive systems. Often this comes with the idea that the most pertinent unit of analysis when studying society is the individual cognitive agent or internal processes therein.
I don’t mean anything specific by ‘transpersonal phenomena,’ but I guess I’m trying to broadly indicate phenomena that are not bounded to an individual’s cognitive apparatus. One such phenomenon might be Kant’s own idea of transcendental reason. Another could be Taylor’s concept of the social imaginary.
I don’t think Luke would disagree with this statement either. That’s why the replacement for intuitionist moral philosophy isn’t just moral psychology, it’s “moral psychology, decision theory, and game theory.” You seem to be reading more into Luke’s suggestions than is there.
Probably the latter. It might be worth mentioning that the philosophical interests of many LWers are aimed towards artificial intelligence, and in some scenarios where the kind of philosophizing that LW people do would pay off, political theory seems irrelevant (a singleton, for example), or at least is much less relevant than a theory of ethics precise enough that you can code it into an artificial intelligence.
I don’t see what the latter has to do with the former. As you say, the latter point of view doesn’t seem well-suited to understanding society at large. That has nothing to do with the validity of the former point of view (which I assume is being held in opposition to worldviews that allow epiphenomena).
I don’t see why this isn’t reducible to a naturalized cognitivist worldview. Instead of one mind you study a collection of minds.
I addressed decision theory in my original comment. “And replacing ethics with decision theory would be sidestepping the metaethical question about the legitimacy of consequentialism.”
I think that the replacement would implicitly make a few ethical and metaethical assumptions that are a matter of legitimate debate within academic philosophy.
Ah, I see. While I think I understand Luke’s adherence to LW’s norms and interests, I think it would be very narrow-minded to think that the interests of society as a whole or the academic system in particular share the focus of LW.
As long as Luke is addressing what he sees as problems with the curriculum of philosophy departments (which is itself a rather political issue, really), wouldn’t it be irrational to ignore the real context in which philosophy occurs (a sociopolitical one)?
I agree. I was just indicating a common association.
The validity of the former point of view tends to be challenged by those with either phenomenological or social constructivist orientations. (I am not sure whether these positions ‘allow epiphenomena’ or not; I expect that when taken to their logical conclusion, they don’t, but that they are coreducible with the cognitivist naturalist view)
I fundamentally agree. However, though I think that these can in principle be reduced to a naturalized cognitivist view (or, we could say, ontology), that doesn’t mean that this can be done easily, or that that reduction will necessarily get us farther or faster than a different level of analysis.
Because of the difficulty of that reduction and the possible intractability of the social theories in their reduced form, it makes sense to continue inquiry on a social or political level. This level of analysis often evokes philosophical concepts that are not in Luke’s curriculum.
I guess something like this.
Do you know the author of that page? If you do, could you try convincing them to include more examples to constrain the interpretation of their abstractions? They seem to have interesting ideas, but my understanding of them currently depends heavily on charitable interpretation rather than actual confidence that the author has correct and well-calibrated ideas in mind...
I don’t. I saw that page linked to from here.
Anyway, I’ve seen certain LWers hypothesize/claim/point out that in our culture it is taboo to talk about certain intersubjective truths too explicitly. See, for example, this and the comment thread to it.
Working in philosophy, I see some movement toward this, but it is slow and scattered. The problem is probably partly historical: philosophy PhDs trained in older methods train their students, who become philosophy PhDs trained in their professor’s methods, plus whatever they could weasel into the system that they thought important (which may not always be good modifications, of course).
It probably doesn’t help that your average philosophy grad student starts off by TAing a bunch of courses for a professor who sets up the lecture and the material and the grading standards. Or that a young professor needs to get new courses cleared through an academic structure. It definitely doesn’t help that philosophy has a huge bias toward historical works, as you point out.
None of these are excuses, of course. Just factors that slow down innovation in teaching philosophy. (which, of course, slows down the production of better philosophical works)
This made me chuckle. Truth is often funny.
Downvoted for sloganeering and applause-lighting.
That’s quite an applause-lighting slogan you have there.
ω+1
Yep. I downvoted it for that reason.
Could you be more specific?
Yes, good idea.
The “reactions” to the abstracts of philosophical papers are a clear example of what I mean. To me, these alternating sections of carefully worded academic abstracts, followed by a few words of sarcastic barb, feel too much like a solid dig at the other side instead of a thoughtful argument.
Another example of “yay-science”-ing: The post mentions with approval a suggestion to defund all university philosophy programs that don’t lead to scientific advances. Of course, if philosophy were only useful for its impact on science and engineering, then that might be a good idea. But that premise is not obviously true. However, the post appears to accept it uncritically.
The opening quotation is flippant and hyperbolic, and is neither qualified nor argued for in the rest of the post.
The proposed curriculum reform is a smorgasbord of LW interests (yay LW!). Yet the post does not argue for the curriculum. Instead, it asserts that curricula need more X and less Y, where X sounds scientific and Y sounds prehistoric. This is what I’d call sloganeering.
Wording: Also in the curriculum bit, the post states that universities teach students to “revere” failed methods. Perhaps true, but unsubstantiated here. Also, I think the word “revere” is a boo-button for rationalists—we know we’re not supposed to revere things, especially not old thinkers, so hearing that someone is revered presses a button and we say “Boo to old thinkers! Hooray for scientific progress!” (OK, that one might be just me.)
I think any of these would have been OK had the rest of the post been exceptionally meaty, but this one was not.
The ‘thoughtful argument’ parts are often hosted in other posts. I generally try not to write 20-page posts, but to break things into pieces. E.g. my reaction to abstract #3 is backed up here and here.
No, it doesn’t.
Right, the purpose of this post isn’t to argue that specific point. What’s your view, here? That an article should argue for every claim it makes? I doubt that’s what you intend, as that would mean that each article actually becomes a book.
Hmmm. Maybe I could give a lot more detail about why I made those specific recommendations in a discussion post or something.
Fair enough, I’ll edit that.
There is no need to write a 20-page post, let alone a book. But that doesn’t mean your only remaining option is barb. Regarding the responses to those philosophical articles, you could have responded briefly yet earnestly.
As for the Russell quotation: No, I do not think an article should argue for every claim it makes. (It would not be a book; it would be a universe.) But the quotation was a dig at those self-important philosophers. That’s why, I thought, it made the post seem applause-lighty.
I guess you’re right that the post doesn’t really approve of Glymour’s suggestion. I mistakenly read your approval into it.
Thanks for keeping the tone of this thread reasonable.
It isn’t remotely clear to me from the abstract that the author is “arguing about the definition” of knowledge at all.
Incidentally, I have noticed that LWers often are not good at distinguishing between saying something novel about what X is, and changing the definition of X.
Speaking as someone who has read a lot of philosophy...
If I had a boatload of money, I would currently be throwing it at you to make this thing happen.
Actually, is this happening anywhere? Does CMU teach this sort of stuff in their philosophy department? Luke gave as examples the five top American programs, but are there other programs ranked lower which teach philosophy Pearl and Kahneman style?
I seriously doubt it. This is pretty much a “reboot” of Philosophy—a reconception of what it’s about. Anyone who wants to put together a program like this might hesitate to call it Philosophy instead of something else.
I’m not sure. Looking at CMU’s website makes me think that they are leaning in this new direction, which is maybe not reflected in the Intro courses yet, but is certainly present in the lectures they have scheduled, as well as the fact that it offers a Major in Logic & Computation and puts it in the Philosophy department.
The tech schools have had excellent philosophy departments. It’s no accident that Judith Jarvis Thomson taught at MIT.
If I were looking into majoring in philosophy and possibly interested in this new-fangled portion of it, you’re saying tech schools are the way to go?
I do not know that specifically.
However, the advice I would give to anyone thinking of majoring in philosophy is: don’t major in philosophy.
That said, I don’t think it matters too much what you major in. The main benefit of a liberal arts degree is the liberal arts part—being exposed to people from many different disciplines with different ways of thinking, being forced to take them seriously for a while, and getting a chance to see the connections between them.
Really, if you want to major in something, you should take the opportunity to learn a skill, or else to take advantage of machinery that you’ll only find in a university. There are things that you can learn in college, like how to mix chemicals in a lab or how to make pottery, that are difficult to learn without the proper facilities. And if you do prefer learning academic subjects in a class, remember that math and computing are good bases for everything.
You can read philosophy on your own time, and if you’re reasonably intelligent then reading it in a class probably won’t help. A philosophy club might be a good idea—those are often as good as seminar classes.
Actually, I agree with all this. Phil is a great hobby. No special equipment is required and you can do it anywhere.
.
Yes, though it would be better-written if I’d instead trained in writing and it would be more useful if I’d been able to link to empirical research demonstrating the effects I just baldly asserted. My training in philosophy did seem to make me better at sounding like I know what I’m talking about when I tell other people what to do.
If you can describe the methods of sounding authoritative, I think it would be very valuable, whether as a contribution to understanding the dark arts or as a tool for increasing motivation. (Or is it a skill of sounding right which doesn’t actually motivate people?)
I’ve been wondering about the techniques ever since I noticed that Heinlein had a talent for sounding right.
Have you tried listening to the Onion Talks?
I hadn’t heard of them, but I don’t think the one you linked to is a very good example of sounding authoritative.
I profess to have a skill at sounding right, but I did not learn a lot of theory.
.
Thanks, but I can describe at least in general terms how I could have written it better...
I could have arranged the words so that they form pleasing patterns when spoken aloud. The first sentence uses the word “that” which might be confusing to some readers, especially seeing the comment permalink without context—I could have spelled out briefly what the subject was. The rhetorical “However” at the beginning of the second “paragraph” serves a useful stylistic purpose, but will turn off some readers who think it’s improper. Ditto with making “paragraphs” of fewer than two sentences. “Different people with different ways of thinking” places undue emphasis on the repeated word “different”, and it would probably be better to use more specific words in each case. I could have taken into consideration what the purpose of writing that comment was and tailored the rhetoric specifically to that purpose—if I wanted to convince someone not to major in philosophy, there are some extra facts about philosophy I could have dug up to make it hit home harder. A superintelligence might be able to create a basilisk-string that would insert the knowledge directly into your mind.
And those ideas are without significant training or practice as a writer. If I couldn’t do better with training, then there’s significant low-hanging fruit out there for improving writer training.
Here is CMU’s spring 2013 philosophy department course catalog. Unfortunately, the CMU website doesn’t show syllabi for its philosophy classes. Just looking at the classes, they appear to be much more logic- and compsci-heavy than most philosophy departments, but also cover some standard stuff: political philosophy, Kant, etc.
When I saw the title “Religious Music for Godless Ears,” I was sure it would be in some second rate journal, maybe Religious Studies at best. But nope! It’s in freakin’ Mind.
Is this problem limited to philosophy?
Good work in virtually every discipline requires a semi-decent grounding in math (with the possible exception of menial work):
History? math
Linguistics? math
Medicine? math
Philosophy? math
And why not: Art? math
Indeed, the universities teaching such subjects would do well to realize this and make math an integral part of the curriculum in most subjects, as opposed to the tacked-on (or non-existent) math courses they have now.
I agree that there is good work to be done with math in all of those fields. But there’s plenty of good work in most of them that can be done without math too.
Yes. Two caveats:
1) The person doing the good work without math should remember to consult someone with the math skills before publishing their results, if they are trying to say something math-like.
For example, to invent a hypothesis, design an experiment and collect data, the math may be unnecessary. But it becomes necessary at the last step when the experimenter says: “So I did these experiments 10 times: 8 times the results seemed to support my hypothesis, 2 times they did not; therefore… what exactly?”
2) There should be enough people in the given field knowing the math, so when the person from the first example wants to find a colleague with domain knowledge and math skills, they actually find one.
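To make the “8 out of 10” example concrete: here is a minimal sketch of the one-sided binomial test the experimenter would need at that last step, assuming (purely for illustration) that the null hypothesis gives each experiment a 50% chance of a supporting result:

```python
from math import comb

def binom_p_value(successes, trials, p=0.5):
    """One-sided p-value: probability of seeing at least `successes`
    supporting results in `trials` experiments under the null hypothesis."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 8 supporting results out of 10 experiments, null = 50% chance each
p = binom_p_value(8, 10)
print(round(p, 4))  # → 0.0547
```

At p ≈ 0.055 the result narrowly misses the conventional 0.05 threshold, which is precisely the kind of call that is easy to get wrong without consulting someone who knows the math.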
I recently went to a linguistics colloquium because the talk was about extending a model of grammatical choice to decision theory on polynomial rings. Even if it wasn’t very accessible to linguists, one of the speakers was clearly a mathematician and there are people making these connections.
Yes. From the entirely anecdotal evidence I have gathered on the subject, it would seem that such research is more often being done by outsiders from maths fields who decided to study some linguistics (or other non-maths fields), than by linguists who decided to study some mathematics.
I think the way that we specialise from the start is unnecessary. From age 4, kids go to one place (or at least have set times) for maths, another for science, another for art, history, and everything else. Knowledge is so intertwined that it would be better if you could learn it all together, so that the subjects complemented each other. Then someone could notice what vision in machine learning, a certain painting from the seventeenth century, a reflexive verb in French, and fluid mechanics have in common, and write a paper under the title ‘philosophy’ about aesthetics. Or something. It’d just be nice to be able to formally use knowledge that you haven’t had to spend twenty years studying.
It’s not going to happen because it would disqualify too many candidates and make courses unpopular. Maths is a huge turn off for a lot of people.
Also, one could argue that history is relevant to just about everything. Etc.
Indeed. This is a bug, not a feature, and alas, it holds these fields back.
It is certainly true that history-of-(field) is useful for people doing work in (field). History in general, while useful in general, is less directly useful for a specific field. And indeed, most fields do spend reasonable (or more than reasonable!) amounts of time discussing their own history. Art does this, philosophy does this, even mathematics and physics do this to some extent.
It is true that the argument “one could argue that (some field) is relevant to just about everything and that therefore more of (this field) should be taught” can be made convincingly for many fields, but the fact that it can be made for many fields is not an argument against it; it just means that some field must be prioritized, hopefully on utilitarian grounds.
I disagree that this is a bug, not a feature. I think it’s useful for fields to contain people with different styles of thinking. The people who are competent at math are probably N types on the MBTI, people who are good at abstract reasoning, but who might be less competent at focusing on empirical data and specific concrete situations. The sciences, especially the social sciences, need people who are good at observing/collecting data, and I would hate to disqualify these people with a math requirement, or relegate them to lower-status because their minds operate in a different (but also useful) way.
(This comment informed by having read this essay earlier this morning.)
I suspect that the negative attitude towards math has less to do with personality type and more to do with the execrable state of mathematics education.
That is an exceedingly optimistic hypothesis.
Might be, indeed. This hasn’t stopped physics, chemistry, engineering, biology, astronomy, etc., all of which have empirical data and concrete situations, and are chock-a-block with maths.
Indeed they need such people. If you have evidence that the present selection procedures prevalent in the social sciences select for such people, I would be delighted to hear it.
Observing and collecting data is stereotypically something that maths types are good at. Consider Google, data science, and data mining.
Let me refer to “Why is machine learning not used in medical diagnosis?”
The expert systems in question supposedly outperform human doctors!
I hypothesise as follows: the non-mathy fields maintain a group dynamic that causes a certain hostility towards mathematical ideas, even when such ideas are objectively superior. To an extent, this also prevents objective judgement of people’s abilities within the field, and steers these fields away from a desirable meritocratic state. We end up with fields that select against mathematical ability (those with mathematical ability flee as soon as they realise that the entire history curriculum does not contain a single course on radiometric dating—I wish I were kidding), and that may not select for other desirable qualities instead.
From the essay:
It might be possible to get some information about this from the survey.
The utilitarian/autism-spectrum correlation may be true in the general population, but there doesn’t seem to be any correlation between self-reported AQ and consequentialism endorsement in the LW population (perhaps because the LW population is already self-selected for either being a consequentialist or coming up with good justifications for non-consequentialism):
(A positive correlation suggests that higher autism scorers were a tad more likely to endorse a higher category, that is, deontology or virtue ethics.)
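For readers who want to replicate the kind of check described above, a Spearman rank correlation is one standard choice when one variable is an ordered category. This sketch uses invented data: the AQ scores and ethics codings below are hypothetical stand-ins, not the actual LW survey numbers.

```python
def rank(xs):
    """Assign average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical survey rows: (AQ score, ethics category coded
# 1 = consequentialism, 2 = deontology, 3 = virtue ethics)
aq     = [12, 31, 22, 17, 28, 35, 9, 24]
ethics = [1,  2,  1,  1,  3,  2,  1, 3]
print(round(spearman(aq, ethics), 3))
```

A rho near zero on the real data would match the reported finding of no correlation; the sign convention matches the parenthetical above (positive = higher AQ goes with higher-numbered categories).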
Thank you for checking.
I like this post. Can you think of any pre-20th century philosophers whose works you still hold to be valid/useful today? [or from that list, any pre-21st century...]
Hume turns out to have been right about an awful lot, but still… why read Hume when you can read contemporary works of science and philosophy that are clearer, more precise, and more correct? (If you’re reading Hume for his lovely prose, I suppose that’s a different matter.)
Speaking of Hume, the Nov. 30th episode of Philosophy Bites was kind of amusing. A bunch of philosophers, including famous ones, gave their answers to “Who’s your favorite philosopher?” IIRC, when giving their reasons for liking their favorite philosopher, almost nobody said “because this philosopher turned out to be correct about so much” — except for all the people who picked Hume.
Bostrom simply said: “I’m not sure I have one favorite philosopher. Contemporary philosophy, at least the way I’m doing it, is more like science in that there are many people who have made significant contributions and you’re not so much following in the footsteps of one great individual. [Instead] you’re drawing on the heritage accumulated by many people working for a long time.”
Because Hume drew correct conclusions from very little information (relative to what it took for Science to catch up), and I want to learn how to do that.
Good answer.
Qiaochu_Yuan has a point, but Hume was conspicuously right about so many things that almost everyone around him was wrong about, I think there might indeed be some “Humeness” at work there. Maybe: unusually good rationality. Or maybe he was a plant from our simulators.
What about Epicurus?
It’s not clear that Hume having drawn correct conclusions from very little information comes from any essential Humeness that you should be trying to emulate. If the set of reasonable-sounding answers to the kinds of questions philosophers like Hume were thinking about is small enough, you’d expect that out of a sufficiently large pool of philosophers some of them would get it mostly right by sheer luck (e.g. Democritus and atoms). You’d need evidence that Hume was doing very well even after adjusting for this before he becomes worth studying.
(I say this knowing almost nothing about Hume—I last took a philosophy course over 8 years ago—and so if it’s obvious that Hume was doing very well even after adjusting for the above then sure, study Hume.)
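The selection-effect argument above can be made concrete with a toy simulation. All the numbers here (answer-space size, pool size, question count) are invented assumptions for illustration, not estimates about actual intellectual history:

```python
import random

random.seed(0)

def best_score(n_philosophers, n_questions, n_answers):
    """Each philosopher guesses uniformly among the plausible answers to
    each question; return the best fraction-correct in the whole pool."""
    truth = [0] * n_questions  # label the correct answer 0, w.l.o.g.
    best = 0.0
    for _ in range(n_philosophers):
        correct = sum(random.randrange(n_answers) == t for t in truth)
        best = max(best, correct / n_questions)
    return best

# With 3 plausible answers per question, a pool of 10,000 guessers,
# and 10 big questions, the luckiest guesser looks impressively right:
print(best_score(10_000, 10, 3))
```

With these numbers the pool essentially always contains someone scoring at least 80%, so “was right about a lot” is weak evidence of special insight unless you adjust for the size of the pool, which is exactly the point being made about Hume.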
This seems to be the case.
As I was reading your post, I kept thinking to myself: “Yeah, well this applies to almost everybody except for Hume (some of the time)” so I find myself nodding along to everything you said in this comment.
I’m not Luke, and I’m not even sure this is what he would count as philosophy, but the Stoics were right about an awful lot of practical things to help you live a better life, and research now seems to be indicating that their techniques do in fact work.
Research? Research?
Someone asked me via PM, in reference to this post, “Do you have any specific recommendations on the most useful fields, ideas, or techniques thus far [for solving FAI-related philosophical problems]?” I figure I’d answer here publicly in case anyone else finds my answer helpful. This is a list of topics that I studied over the years that I think contributed most to what philosophical progress I managed to make. (The order given here is not very significant. It’s just roughly the order in which I encountered and started studying these topics.)
evolutionary biology and psychology
computer science (theory of computation, algorithms, data structures, OS, compiler, languages)
math (number theory, probability, statistics)
cryptography
game theory
anthropic reasoning / indexical uncertainty
Tegmark’s Ultimate Ensemble
algorithmic information theory, AIXI
logic, recursion theory
decision theory
philosophy of science, philosophy of math, ethics
Another view of Philosophy, which I believe Russell also subscribed to (but I can’t seem to find a reference for presently) is that philosophy was the ‘mother discipline’. It was generative. You developed your branch of Philosophy until you got your ontology and methodology sorted out, and then you stopped calling what you were doing philosophy. (This has the amusing side-effect of making anything philosophers say wrong by definition—sometimes useful, but always wrong.)
The Natural Sciences, Psychology, Logic, Mathematics, Linguistics—they all got their start this way.
That’s how Philosophy used to work. Nowadays, I think the people who can do that type of “mucking around with complex questions of ontology and methodology” thinking have largely moved on to other disciplines. If we define Philosophy as this messily complex discipline-generating process, it no longer happens in the discipline we call “Philosophy”.[1]
That said—while I would personally enjoy the “intro to philosophy” syllabus Luke proposes, I think it’s a stretch to label the course a philosophy course, much less [The One And True] Intro To Philosophy. It’s cool and a great idea, but the continuity with many models (be they aspirational or descriptive) of Philosophy is fairly tenuous, and without a lot of continuity I think it’d be hard to push into established departments.[2]
If we’re speaking more modestly, that philosophers should be steeped in modern science and logic and that when they’re not, what they do is often worse than useless, I can certainly agree with that.
[1] E.g., Axiology.
[2] Why not call it “introduction to scientific epistemology”?
I’m not sure if this really applies to philosophers in general or just a few that have been commenting here, but I think I’ve found one source of friction. It seems that SI/Luke care about problems/questions that are different enough from what philosophers think their field is about for even good (by its own standards) philosophy to be largely worthless for SI’s purposes, while still being similar enough on the surface for a lot of destructive interference to happen. By destructive interference I mean things like LWers thinking that phil should have relevant answers because it addresses similar-sounding questions, third parties thinking that SI-type work belongs in phil journals/departments/grant slots even when it’s not really appropriate to that venue, having irrelevant phil papers pop up due to overloaded keywords, etc.
There doesn’t seem to be anything that shouldn’t be changed, and thus it seems meaningless to keep the label “philosophy”. Hence why I object to saying LW is about philosophy as well. The ONLY similarity is that it tries to resolve questions (previously) thought to be the domain of philosophy to solve, but that used to be the case with many other things that are now their own sciences. I say just plain scrap all of philosophy, and move all the supposed tasks of it that are worth keeping over to new fields if they aren’t resolved by existing ones already.
There are a series of statements like this in Luke’s post and in the comments. I don’t understand them. What would it mean to ‘scrap philosophy’? Would someone from like the government have to come along and make it illegal or something? It doesn’t seem like there’s any way to change philosophy, or eradicate it, except by arguing with philosophers and convincing them to do something else. Is that what ‘scrap philosophy’ means?
Presumably s/he means de-funding everything that pretends to be philosophy, but is, in fact, history of thought, and so belongs in the history department.
But the funding for philosophy programs comes from universities. I doubt the government itself spends more than a pittance on philosophy. So do you mean ‘scrap philosophy’ as in, try to convince universities to fire the philosophers under their employ?
I am not suggesting this, just trying to interpret what Armok_GoB may have meant. My view is that the defunding of the old school should happen organically, as it usually does. Newer, more successful approaches and sub-disciplines slowly replace the old as the old guard retires.
Ah, thanks. Is it weird that this has never happened to philosophy as a named discipline? Certainly schools of thought come and go, but why is philosophy as an academic banner by far the longest lived?
“Love of wisdom” is a very broadly-applicable term. Also, it managed to cough up the entire field of pure mathematics once, and arguably the slim chance of something else as good or better being in there somewhere justifies a lot of scattershot work.
No. Historians aren’t trained to evaluate philosophical thought. Ask them the causes of a war, they can tell you; ask them the motivations for Aristotle’s theory of Entelechy, they’ll go “huh?”.
Well, presumably historians do specialize. In the revised world where history of philosophy ended up in the history department, there would be historians specializing in the history of philosophy. For that matter, I’m sure such people exist already.
The real question is which option provides more synergy:
1. learning about the motivations for Aristotle’s theory of Entelechy, together with a study of the culture of Greece in the 4th century BC (the historical option), or
2. learning about the motivations for Aristotle’s theory of Entelechy, together with a modern understanding of causality or whatever (the philosophical option).
If I can offer an expert (though probably biased) opinion: 2.
I think it would be something more along the lines of spreading a meme that says “Let’s just ignore philosophy, it’s pretty much a waste of time.”
This is happening already to some degree. It would have to be a heck of a lot more infectious of a meme to actually destroy philosophy as a field, though.
That’s been a meme since 400 BC, and it remains by far the dominant view today among laypeople, scientists, economists, etc. Basically, the only people who think philosophy is worth pursuing are philosophers. If that’s all you mean by ‘scrapping philosophy’ then the job is long since done.
Yeah, this is true. Maybe scrapping philosophy means just not funding it anymore?
Meanwhile, back in reality:
“Philosophy, Politics and Economics (PPE) is a popular interdisciplinary undergraduate/graduate degree which combines study from the three disciplines.”
Alain de Botton’s pop philosophy sells millions, presumably to laypeople.
And philosophers appear with scientists at interdisciplinary conferences.
As some quick replies have pointed out, yea, cutting funding and spreading memes about ignoring it and actually ignoring it.
Getting people to ignore philosophy is, as I said to DSimon, largely accomplished already. Ignoring it is as easy as pie. As far as defunding it goes, I’m not sure I see the point. It’s not as if it uses up much of any given university’s budget. I’d be willing to bet that philosophy departments are generally cash positive for a university.
If it wasn’t easy it probably wouldn’t be worth the trouble to suggest.
I’m fairly certain a professor at the University of Chicago told our class that the philosophy department was cash negative.
Really? Which professor?
I believe it was Ted Cohen, who’s the head of the philosophy department. I’m not certain though.
As a curiosity, what would they make money on?
Ted Cohen huh.
They make money by attracting undergraduates, and they have low overhead because in general philosophy departments don’t pay professors very well, and the department itself requires nothing more sophisticated than a few rooms filled with desks.
Oh, you’re including attracting undergrads! I think he was just talking about direct earnings.
When were you at Chicago, if you don’t mind me asking?
Right now. I’m a second year. You?
Huh, what a coincidence. I’m a third year.
Would you be interested in going to a meetup at UChicago? There are regular meetings downtown, but in the past we’ve held a couple here that were well-attended, and I’m thinking of having one here again.
Would you consider turning this knowledge into an actual curriculum that includes practice problems and exams?
I’m thinking of something along the lines of MIT’s free curriculum and Khan Academy’s math section. I have no problem with still linking to these textbooks as long as the freely available curriculum made by you or your team fills the gaps and there are plenty of ways to test understanding. I name Khan’s math section specifically because it offers effectively infinite practice problems, treats ten in a row as a signal of proficiency, and has built-in SRS.
Unlike Khan, however, I would want to see mastery of the current state of the field instead of the low target of a certain school’s exam requirements.
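A sketch of the mechanics being praised here, i.e. a streak counter where ten in a row marks proficiency, plus a toy spaced-repetition backoff (the interval-doubling rule below is my own stand-in for illustration, not Khan Academy’s actual scheduler):

```python
from dataclasses import dataclass

STREAK_FOR_MASTERY = 10  # "ten in a row signals proficiency"

@dataclass
class SkillState:
    streak: int = 0
    interval_days: int = 1  # delay before the next scheduled review

    @property
    def mastered(self) -> bool:
        return self.streak >= STREAK_FOR_MASTERY

    def record(self, correct: bool) -> None:
        """Update the streak and the toy SRS interval after one answer."""
        if correct:
            self.streak += 1
            if self.mastered:
                self.interval_days *= 2  # back off reviews on success
        else:
            self.streak = 0          # any miss resets the streak
            self.interval_days = 1   # and brings the skill back soon

s = SkillState()
for answer in [True] * 10:
    s.record(answer)
print(s.mastered)  # → True after ten consecutive correct answers
```

The design point is that a streak requirement plus expanding review intervals tests retention over time, not just performance on one exam, which is what makes it a plausible model for “mastery of the current state of the field.”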
And here’s a philosopher correcting a scientist:
“The interconnection of neuroscience and free will has many researchers trying to make bold claims about their findings. In my last post I called Sam Harris’ conclusion that “free will is an illusion” into question. Specifically, I suggested that there were competing interpretations that could be made from the data that neuroscientist Benjamin Libet was using to debunk free will (I mentioned Al Mele’s interpretation as a counterexample to Libet’s). Finally, some neuroscientists seem to have considered Mele’s suggestion (though interestingly I read no reference to Mele) and did some science to test his alternative interpretation. It turns out that Mele was right, and in turn, that Libet was a bit hasty with his conclusion, as was Sam Harris. Click here for the New Scientist article detailing the study. So it seems that the criticisms I levied against Harris might have more sticking power as a result. Seems that Libet has been debunked and not free will. Below you’ll find some central points directly taken from the New Scientist article.”
Much of professional analytic philosophy makes my heart sink too. Reading Kant isn’t fun—even if he gains in translation. But I don’t think we can just write off Kant’s work, let alone the whole of still unscientised modern philosophy. In particular, Kant’s exploration of what he calls “The Transcendental Unity of Apperception” (aka the unity of the self) cuts to the heart of the SIAI project—not least the hypothetical and allegedly imminent creation of unitary, software-based digital mind(s) existing at some level of computational abstraction. No one understands how organic brains manage to solve the binding problem (cf. http://lafollejournee02.com/texts/body_and_health/Neurology/Binding.pdf) - let alone how to program a classical digital computer to do likewise. The solution IMO bears on everything from Moravec’s Paradox (why is a sesame-seed-brained bumble bee more competent in open-field contexts than DARPA’s finest?) to the alleged prospect of mind uploading, to the Hard Problem of consciousness.
Presumably, superintelligence can’t be more stunted in its intellectual capacities than biological humans. Therefore, hypothetical nonbiological AGI will need a capacity to e.g. explore multiple state spaces of consciousness; close Levine’s Explanatory Gap (cf. http://cognet.mit.edu/posters/TUCSON3/Levine.html); map out the “neural correlates of consciousness”; and investigate qualia that natural selection hasn’t recruited for any information-processing purpose at all. Yet classical digital computers are still zombies. No one understands how classical digital computers (or a massively classically parallel connectionist architecture, etc.) could be otherwise, or indeed have any insight into their zombiehood. [At this point, some hard-nosed behaviourist normally interjects that biological robots _are_ zombies—and qualia are a figment of the diseased philosophical imagination. Curiously, the behaviourist never opts to forgo anaesthesia before surgery. Why not save money and permit his surgeons to use merely muscle relaxants to induce muscular paralysis instead?]
The philosophy of language? Anyone who believes in the possibility of singleton AGI should at least be aware of Wittgenstein’s private language argument (cf. http://en.wikipedia.org/wiki/Wittgenstein_on_Rules_and_Private_Language). What is the nature of the linguistic competence, i.e. the capacity for meaning and reference, possessed by a notional singleton superintelligence?
Anyone who has studied Peter Singer—or Gary Francione—may wonder if the idea of distinctively Human-Friendly AGI is even intellectually coherent. (cf. “Aryan-Friendly” AGI or “Cannibal-Friendly” AGI?) Why not an impartial Sentience-Friendly AGI?
Hostility to “philosophical” questions has sometimes had intellectually and ethically catastrophic consequences in the natural sciences. Thus the naive positivism of the Copenhagen school retarded progress in pre-Everett quantum mechanics for over half a century. Everett himself, despairing at the reception of his work, went off to work for the Pentagon designing software targeting cities in thermonuclear war. In countless quasi-classical Everett branches, his software was presumably used in nuclear Armageddon.
And so forth...
Note that I’m not arguing that SIAI / lesswrongers don’t have illuminating responses to all of the points above (and more!), merely that it might be naive to suggest that all of modern philosophy, Kant, and even Plato (cf. the Allegory of the Cave) are simply irrelevant. The price of ignoring philosophy isn’t to transcend it but simply to give bad philosophical assumptions a free pass. History suggests that generation after generation believes they have finally solved all the problems of philosophy; and time and again philosophy buries its gravediggers.
But this time is different? Maybe...
You seem to want philosophers to start being generalists who understand the cutting edge that science and math have to offer. But what kind of contributions do you expect them to make? Some examples where a philosopher added critical insight because of his/her generalist background would be nice. Clark Glymour was a good one, but his work seems to have been just math and CS (I may be wrong). Do you think his background as a generalist made it more likely for him to achieve his insights, compared to say someone with just a math or CS background?
Also, do you expect that the kind of philosophers you are proposing could someday be hired by the private sector?
The private sector already hires plenty of people who have philosophy degrees. Philosophy just isn’t a job title.
Thank you for clearly expressing what is wrong with the current state of philosophy as practiced by professional philosophers. It sums up my own vague reservations pretty well. (Yes, I know, confirming evidence bias.) Catchy title, too. The next time I hear someone quoting Plato or Kant, I’ll be tempted to reply “Bzzt! wrong P&K!”.
I see your point. Many philosophers still like reading and writing about dead people rather than looking to science for entertainment and answers. However, it is a hasty generalization to infer from this fact that “philosophy is a diseased discipline which spends much of its time debating definitions, ignoring relevant scientific results, and endlessly re-interpreting old dead guys who didn’t know the slightest bit of 20th century science.” And it is a bit myopic to think that your suggestion has not already been addressed by numerous institutions.
Plenty of philosophers study contemporary science and statistics about as much as philosophy. I myself am very interested in understanding philosophical cognition, and I am by no means alone in that interest.
The reason most departments do not teach what you want them to teach is because almost no one in a philosophy department specializes in what you are after, otherwise they would not be (solely) in the philosophy department. So to do what you want, universities would have to offer philosophy degrees that are interdisciplinary...and they already do. CU Boulder, UCSD, and GSU all offer PhDs in philosophy and neuroscience, for example.
This post has me wondering if we should make basic philosophy (and by basic I do not mean “ancient”) compulsory for computer science, engineering, and science majors. Perhaps that would obviate the need for unwarranted commentary like this post.
Dogmas of analytic philosophy, part 1⁄2 and part 2⁄2 by Massimo Pigliucci in his Rationally Speaking blog.
He is quoting Paul Thagard, who is making a number of the same mistakes as Lukeprog....so....?
“old dead guys” is mind kill, and it sounds immature/impolite.
On the post itself, it’d be awesome if SIAI starts this in-house, something along the lines of semester long CFAR boot camp.
The undergrad majors at Yale University typically follow lukeprog’s suggestion—there will be 20 classes on stuff that is thought to constitute cutting-edge, useful “political science” or “history” or “biology,” and then 1 or 2 classes per major on “history of political science” or “history of history” or “history of biology.” I think that’s a good system. It’s very important not to confuse a catalog of previous mistakes with a recipe for future progress, but for the same reasons that general history is interesting and worthwhile for the general public to know something about, the history of a given discipline is interesting and worthwhile for students of that discipline to look into.
Hume and Nietzsche are both excellent exceptions to your general rule.
Also, #4 seems completely fine to me.
My impression of what I read from Nietzsche is that it is mostly a collection of sarcastic one-liners.
My impression is that Nietzsche tries to make his philosophical writings an example of his philosophical thought in practice. He likes levity and jokes, so he incorporates them in his work a lot. Nietzsche sort of shifts frames a lot and sometimes disorients you before you get to the meaning of his work. But, there are lots of serious messages within his sarcastic one liners, and also his work comprises a lot more than just sarcastic one liners.
I feel like some sort of comparison to Hofstadter might be apt but I haven’t read enough Hofstadter to do that competently, and I think Nietzsche would probably use these techniques more than Hofstadter so the comparison isn’t great.
Reading Nietzsche is partially an experience, as well as an intellectual exercise. That doesn’t accurately convey what I want to say because intellectual exercises are a subset of experiences and all reading is a kind of experience, but I think that sentence gets the idea across at least.
Then you haven’t read Genealogy of Morals. Those essays have a thesis and supporting argument (with a heavy dose of hyperbole). Genealogy is certainly more comprehensible than Thus Spoke Zarathustra—which might reasonably be described as extended one-liners.
Weatherson asks what could be done in other departments. Reply: all the formal methods (logic, decision theory, and game theory), plus empirical ones like x-phi. Besides, I don’t want to shut down philosophy departments, but I would be happy if they moved toward something like CMU + cogsci.
So I’ve made this sort of argument before in a somewhat more limited form. The analogy I like to give is that we don’t spend multiple semesters in chemistry discussing the classical elements and phlogiston (even though phlogiston did actually give testable predictions, contrary to some commonly made claims on LW). We mention them for a few days and move on. But in this context, while I’d favor less emphasis on the old philosophers, they are still worth reading to a limited extent, because they did phrase many of the basic questions (even if imprecisely) that are still relevant, and are necessary to understand the verbiage of contemporary discourse. Some of them even fit in with ideas that people at LW care about. For example, Kant’s categorical imperative is very close to a decision-theory or game-theory approach if one thinks about it as asking “what would happen if everyone made the choice that I do?” Even Pearl is writing in a context that assumes a fair bit of knowledge about classical notions. What is therefore needed, I think, is not a complete rejection of older philosophers, but a reduction in emphasis.
In my Intro to Moral Philosophy course, Kant’s work was preceded by an introduction to basic game theory and the like, which most people understood much better than his actual work, so I don’t really think his work is a necessary foundation or a proper introduction to those fields.
Reduction in the sense of cooking down to essentials? Hopefully someone has already gone over the classics with an eye toward identifying prerequisites and formulating adequate substitutes, and it would simply be a matter of adapting such work for our own use.
I meant in the sense of simply reducing the total amount, but yes, that would help a fair bit. I do think that some degree of reduction in the sense you mention has been done (students almost always study small segments of total works in intro classes—no one, for example, reads all of the Critique of Pure Reason in an intro class), but I’m not completely convinced that it has been that successful or applied to quite the right things.
This is like the opposite of game theory. Assuming that everyone takes the same action as you instead of assuming that everyone does what is in their own best interest.
Yes, at some level one can interpret Kant as saying something like “use decision theory, not game theory.”
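The contrast in this exchange can be made concrete with a toy prisoner’s dilemma. (The payoff numbers below are illustrative and hypothetical, not drawn from anything in the thread; a minimal sketch of the two decision rules being discussed.)

```python
# Toy prisoner's dilemma. payoffs[(my_move, their_move)] = my payoff.
# The numbers are illustrative, chosen to give the standard dilemma.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

moves = ["cooperate", "defect"]

def kantian_choice():
    """'Categorical imperative' rule: evaluate each move as if
    everyone (here, the other player) made the same choice."""
    return max(moves, key=lambda m: payoffs[(m, m)])

def best_response(their_move):
    """Game-theoretic rule: best response to a fixed opponent move."""
    return max(moves, key=lambda m: payoffs[(m, their_move)])

print(kantian_choice())            # cooperate: (3, 3) beats (1, 1)
print(best_response("cooperate"))  # defect: 5 beats 3
print(best_response("defect"))     # defect: 1 beats 0
```

The Kantian rule restricts attention to the diagonal of the payoff matrix and so cooperates, while the best-response rule defects against every opponent move, which is the sense in which the two approaches pull apart.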
Quick Question, a few weeks later: would you be willing to take a guess as to what problems might have caused my comment to be downvoted? I’m stumped.
No idea. I’m perplexed. Your comment seemed to me to be accurate and on point.
Great post.
I wonder: is this study list also good enough for applied psychology? I would like to learn Cognitive Behaviour Therapy, and as you point out above, most studies are flawed.
If this post is not enough, could you write another one answering my question?
I studied a philosophy module during my undergraduate degree in the UK. I noticed that the course was structured very carefully so that in the exam there were two ways of succeeding. (1) Having a very good memory of who had said what, and the ability to write something “English-literature-y” about what they had said. Here I noticed a strong taste for worrying about how stylishly things were said. (2) Doing something more on the logic side of things—this felt much more natural and more precise (less woolly) to me. Now, I have been told that in most mainland European countries option (2) is less emphasised and the subject is more squarely an essay-based humanity.
There is nothing wrong with thing (1) existing. I prefer thing (2), and clearly so do you. But I think the argument “we should drive out (1) to make more space for (2)” is mistaken, and a better line is “they should be separate courses”. (Perhaps with the names “Ancient Thought” and “Modern Philosophy”.)
I am surprised by how much science you bring up. I would assume that scientific advancement will only tangentially change most philosophy. Yes, we no longer think there are only 4 elements (fire, earth, etc.), but in a modern classification that would be labelled a scientific theory (an incorrect one, but still a scientific one).
You recommend Howson & Urbach’s “Scientific Reasoning: The Bayesian Approach.” However, ET Jaynes had some fairly harsh things to say about this book in the References section of PT:LoS:
I’m not sure if this is the kind of thing that I expect Jaynes to be right about though. He would certainly know what modern developments were missing, but I don’t know if he can judge what’s needed in a textbook on philosophy of science.
Are his criticisms here correct? Instead of reading Howson & Urbach, should I be looking for a book that contains what Jaynes says it’s missing, and does not contain what Jaynes says is obsolete?
http://blog.talkingphilosophy.com/?p=6424
I’d argue the old dead guys were much less wrong, on a great many matters, than modern popular educated opinion that is not informed by logic, probability theory, and science. Of the ones you cited, I gained non-trivial insight from reading Aristotle and Nietzsche (haven’t gotten around to Kant). Feelings of insight on their own should probably be mistrusted, but besides being an indicator that they are interesting, I think they are probably right on many matters.
Not that I’m disagreeing with your prescription, since those bits are never steel-manned to take modern discoveries into account, but worse, are systematically rendered intellectually impotent by the way they are taught. Furthermore, most of those observations should be studied by branches other than philosophy (though I really like Aristotle’s approach to virtue).
Also, can I just ask why you used “guys” instead of a more gender-neutral tag like philosophers or thinkers? I don’t think their sex is at all relevant to their philosophical failings or successes. They were separated from us by a social and cultural gap that is much larger than the one that usually exists between people of different sexes in a given society. The raw underlying biology probably does cause a systematic effect, but it isn’t anything not shared by the branches of modern philosophy you champion, the other sciences, or even LessWrong for that matter.
Are you trying to piggyback on or carry this meme? Or do you just comply with the convention of using male pronouns and the like as defaults?
It’s probably just that the fact the mentioned philosophers were indeed male makes it possible to use the diminutive/familiar term “guys”. “Philosophers” or “thinkers” would have looked too respectful. But “old dead guys” expresses the opinion that these were, you know, just these guys who didn’t know much at all by today’s standards.
All is not lost: Carnegie Mellon’s philosophy department is teaching its students causality using Pearl :)
http://www.phil.cmu.edu/projects/causality-lab/
This program allows the instructor to draw a hidden causal Bayes net, aka Nature, and students perform experiments on a budget to determine the underlying causal structure:
“In this version of the lab, the Instructor may set a “cost” for collecting data points, and limit the total resources students may spend on collecting data. The lab will keep track of a student’s remaining resources, and inform the student of how much each experiment will cost. In addition, the Instructor can set essay questions for the students to answer, and give feedback to the students.”
If you want to see what the CMU students are seeing, just download it and check it out; it doesn’t even require installation. Exercises are here.
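For readers curious what such an exercise amounts to, here is a minimal sketch of the underlying idea, using a made-up two-variable “Nature” net with hypothetical probabilities (not the lab’s actual model): observational data alone cannot distinguish A → B from B → A, but an intervention on A can.

```python
import random

random.seed(0)

def sample(intervene_a=None):
    """One sample from a hidden 'Nature' net in which A -> B.
    intervene_a fixes A exogenously, like a lab experiment."""
    a = (random.random() < 0.5) if intervene_a is None else intervene_a
    b = random.random() < (0.9 if a else 0.1)  # B depends causally on A
    return a, b

def prob_b(intervene_a, n=10_000):
    """Estimate P(B) under an intervention setting A."""
    return sum(b for _, b in (sample(intervene_a) for _ in range(n))) / n

# Observationally, A and B are correlated, which is consistent with
# both A -> B and B -> A. Intervening on A breaks the tie: since the
# hidden net is A -> B, setting A shifts P(B); under B -> A it wouldn't.
print(prob_b(True))   # close to 0.9
print(prob_b(False))  # close to 0.1
```

Budgeted experimentation, as in the lab, then becomes a question of which interventions to buy in order to pin down the structure with the fewest samples.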
Luke, I was curious: where does informal logic fit into this? It is the principal method of reasoning tested on the LSAT’s logical reasoning section, and I would say the most practical form of reasoning one can engage in, since most everyday arguments will utilize informal logic in one way or another. Honing it is valuable, and the LSAT percentiles would suggest that not nearly as many people are as good at it as they should be.
In case Singularity University grows over time, maybe one day they will have a philosophy department that teaches it in this way.
Philosophy largely isn’t about uninterpreted reality, it is largely about how humans think about and relate to reality. And each other. And thought itself.
From whom? Do you know of some people who understand philosophy and can do it better than philosophers, but aren’t philosophers?
I find that imprecise. Did you mean conceptual or numerical precision?
It isn’t, because it isn’t just a vaguer way of addressing the same questions.
Coming up with questions for philosophy to answer, teaching advanced question-answering skills, etc.
Yes. Eg: “Is Logical Positivism a good idea?”. Answer: no.
Philosophy has yet to answer what “good” or “idea” even mean with authority, so I’m gonna say no to this, although I don’t disagree with your overall assertion.
I don’t think the fine details of “good” and “idea” are relevant. What’s relevant is that no one does LP any more, and even its former adherents turned against it.
It’s extremely important to realise what Luke is doing here, even if you agree with it. Cognitive science is a sub-discipline of psychology established to reflect a particular philosophical position. Cognitive neuroscience is a sub-discipline of neuroscience established to reflect a particular philosophical position. In both cases the philosophical position, within that sub-discipline, is assumed rather than defended. What Luke is doing is: (1) denying the legitimacy of other parts of behavioural and neural science, thus misrepresenting the diversity of science; (2) using this to then rule in favour of a particular philosophical position within philosophy; but (3) misrepresenting it as making philosophy reflect modern science. So this is trying to establish a philosophical position as the de facto philosophical position without argument.
Yeah. It’s imposing an ideology.
I love you 8)
I think the comments of fortyeridania, JonathanLivengood, Peterdjones and others have pretty much nailed matters, but here’s my take:
This post is actually self-undermining. Roughly, it is an argument that a person’s having a background in Pearl and Kahneman will lead to that person’s being able to reason better than if s/he lacked the background, which is in fact (sorry to be blunt, but I think the balance of the comments support this) a quite poor specimen of an argument made by someone who has the background. There’s no evidence that you would have done even worse without the background. So the post is itself some evidence for the falsity of what it claims.
What is the rational value of the abstracts and your one-liners? I understand the point in each case is that the paper is obviously worthless. But this is false: they are not obviously worthless, insofar as they are made by people who are likely almost as smart and well-read as you, and very likely aware of the kinds of criticisms you make.
You seem to be conflating questions of philosophical pedagogy with questions of professional practice/methodology.
Concerning the former: can you give an example of a philosophical paper which is mistaken as a result of biases which reading Kahneman as an undergraduate might have prevented, and indicate the mistake? Can you give an example of a philosophical paper which is mistaken as a result of a knowledge gap which reading Pearl might have avoided (written since Pearl published)? Your claim would be strengthened, of course, if the latter example is from the considerable majority of philosophy not specifically about the problem of causation (otherwise you’re getting everyone to read Pearl despite its being apparently relevant only to a small minority). In other words, can you give any empirical evidence at all for your view? (Please don’t say simply that people who understand Kahneman won’t rely on ‘philosophical intuitions’, as that’s plainly false and misrepresents the nature of what dispute there is over intuitions in philosophy)
Concerning the latter: your link is to a paper recommending formal methods in epistemology. Sounds terrific! Does the point extend to other areas of philosophy? As CEO of a philosophy/math/compsci research institute, maybe you’d be willing to set the example by going first. Would be great to see a formal statement of your intended argument here, and even better, formal re-statements of your past posts on philosophical topics.
The rest of your post is decent, but this made me scratch my head. What are you trying to say?
I was thinking about this as a problem of so-called social epistemology—specifically, of what a person ought to believe when her or his beliefs contradict someone else’s. It seems obvious to me that—other things being equal—the rational approach to take when encountering someone who appears rational and well-informed and who disagrees with you is to take that person’s thoughts seriously. Since the abstract authors fit the description, it’s obvious, I think, that what they say deserves at least some consideration—i.e., what they say is not obviously worthless, not something to be dismissed with a one-liner.
Is this fair?
I realize the situation is more complicated here, as there’s the question whether a whole discipline has gone off the rails, which I think the OP has convinced himself is the case with philosophy (so, maybe other things aren’t equal). I’ve tried a few times without success to recommend some epistemic humility on this point.