So I have a couple of problems with this post.
Firstly, I think that Luke simply has a very different idea of what philosophy ought to be doing compared to most philosophers. For example, most philosophers think that doing a fair amount of what is (more or less explicitly) History of Philosophy is (a) of independent interest, (b) useful for training new philosophers, and (c) potentially fruitful.
I’m not terribly convinced by (a); I have some sympathy with (b) (many classic philosophers are surprisingly convincing, and it’s worth taking the time to figure out why they’re wrong); and I strongly disagree with (c) (if they had good insights, there should be better presentations of them by now!). I think the disagreement about (a) is the most important, however, as it indicates a simple difference in what people are trying to do with philosophy.
Given that, it just seems childish of Luke to criticise Article #2 on the grounds that it’s really history: of course it is; that’s part of what philosophy departments do. If Luke wants to change the way philosophy tends to be done, fine, but it’s churlish to assume that his preferred approach is what philosophy is already supposed to be doing and that current practitioners are just bad at it.
Secondly, I think I disagree with Luke about what a lot of philosophy is trying to do. Luke finds a lot of so-called “linguistic” philosophy frustrating because he doesn’t feel it solves problems that are “out there”. I’d say that it’s not trying to. The clearest way I can put it is this: philosophy is often trying to solve the problems that ordinary people come up against when they use words. In that situation it’s highly relevant to find out, say, what they mean by the word “knowledge”, as otherwise your answers will have no relevance to the epistemic concepts they actually use.
Philosophers aren’t trying to build an AI, so they’re not usually so interested in the ideal epistemology. They’re interested in what humans are doing, and that involves a lot of probing the language that humans use. In particular, the much-maligned thought-experiments and “intuitions” are perfectly respectable data about what the author, as a competent language-user, thinks about the words in question (which is presumably what the author of Article #3 is trying to do in a specialised way). I think it’s a confusion to suppose that thought-experiments are meant to tell us about the deep structure of the world! (Admittedly, some philosophers make this mistake too.)
Basically, Luke wants to do something completely different from most philosophers, and so is confused that they don’t seem to be doing what he wants them to do.
A couple of other things:

For the record, I think that plenty of philosophers write lots of bullshit, but then so does everyone else. Philosophy is hard; people go astray.
Article #4… it’s discussing some of the potential implications of atheism with regard to people’s responses to various artworks. What’s so problematic?
What would an ideal epistemology be? I’m not asking for the ideal epistemology itself, just this: how could you tell whether you’d developed one, or were at least getting closer to it?
It kind of depends on what you mean by “epistemology”. I was cheating a bit when I said that: many philosophers seem to think that epistemology is simply the study of the concept of knowledge as used by human beings. However, you might also think that what we’re really interested in is how to get useful information about the world.
In that case the human concept of “knowledge” seems pretty shitty: it’s binary, and it comes with a whole host of subtle complications of usage, whereas something like a Bayesian approach, with graded degrees of belief, seems much better (a toy contrast is sketched below).
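To make the binary/graded contrast concrete, here is a minimal Python sketch. It isn’t anyone’s actual proposal; the threshold, likelihoods, and numbers are invented purely for illustration.

```python
# Toy contrast: a binary "knowledge" predicate vs. graded Bayesian credence.
# All numbers below are invented for illustration.

def knows(credence: float, threshold: float = 0.95) -> bool:
    """A crude binary 'knowledge' predicate: you either know or you don't."""
    return credence >= threshold

def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Posterior credence in a claim after observing one piece of evidence."""
    numerator = p_e_if_true * prior
    return numerator / (numerator + p_e_if_false * (1.0 - prior))

credence = 0.5  # start undecided about some claim
for _ in range(3):  # observe three moderately supportive pieces of evidence
    credence = bayes_update(credence, p_e_if_true=0.8, p_e_if_false=0.4)
    print(f"credence = {credence:.3f}, knows? {knows(credence)}")
```

The Bayesian credence climbs smoothly (roughly 0.67, 0.80, 0.89) while the binary predicate stays stuck at False throughout: the yes/no concept discards exactly the gradations the Bayesian agent tracks.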
So I’m claiming that philosophers aren’t necessarily interested in the latter kind of epistemology; they’re interested in “knowledge” as most humans use it, rather than whatever epistemic concepts you would build into a new agent!
‘Ideal’ is underdetermined here, but we could give it content. I can imagine four basic families of ways to evaluate an epistemology (in addition to combinations); a rough sketch of how the first two might be measured follows the list:
Territorial: How useful is the epistemology for causing agents to consistently assert truths and deny falsehoods?
Epistemically Rational: How useful is the epistemology for causing agents to believe things in proportion to the strength of the available evidence? This may be a special case of the territorial evaluation, defined so as to exclude gerrymandered epistemologies that only help their agents by coincidence.
Instrumentally Rational: How useful is the epistemology for causing agents employing it to attain their personal goals?
Moral: How useful is the epistemology for satisfying everyone’s preferences, including the preferences of people who may not subscribe to the epistemology themselves?
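As a gesture at how the territorial and epistemically-rational criteria could be operationalised, here is a hypothetical Python sketch; the agents, signal model, and scoring choices are all invented for illustration, not drawn from the discussion above. It scores two toy epistemologies on simulated true/false claims: plain accuracy of assertion and denial for the territorial test, and the Brier score, which rewards credences proportioned to the evidence, as a crude proxy for epistemic rationality.

```python
import random

random.seed(0)

def simulate(n: int = 10_000):
    """Claims with a hidden truth value and a noisy evidence signal."""
    for _ in range(n):
        truth = random.random() < 0.5
        # The signal is informative but noisy, centred on the truth.
        signal = random.gauss(1.0 if truth else 0.0, 0.8)
        yield truth, signal

def stubborn(signal: float) -> float:
    """Ignores evidence strength: always fully certain one way or the other."""
    return 1.0 if signal > 0.5 else 0.0

def graded(signal: float) -> float:
    """Proportions credence to the evidence signal (crude squashing)."""
    return min(max(signal, 0.05), 0.95)

claims = list(simulate())
for name, agent in [("stubborn", stubborn), ("graded", graded)]:
    # Territorial: how often does a flat assert/deny come out right?
    accuracy = sum((agent(s) > 0.5) == t for t, s in claims) / len(claims)
    # Epistemically rational (proxy): Brier score, lower is better.
    brier = sum((agent(s) - t) ** 2 for t, s in claims) / len(claims)
    print(f"{name:8s} accuracy = {accuracy:.3f}, Brier = {brier:.3f}")
```

The two agents assert and deny identically, so they tie on the territorial measure, but the Brier score separates them in favour of the graded agent; that is roughly the sense in which the epistemically-rational criterion refines rather than duplicates the territorial one.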
This is a good question for Eliezer Yudkowsky, since he seems to think Objective Bayesianism is it.
Philosophers aren’t trying to build an AI, so they’re not usually so interested in the ideal epistemology.

That’s a terribly inadequate reason to be uninterested in the ideal epistemology. Luckily, many philosophers do seem to be quite interested in it; still, like Luke, I wish there were more.
I think (b) can be quite useful, for the reason you described. IMO it’s useful in physics as well, because it lets the student reproduce (or at least read about) the experiments that led to our current understanding of the world. For example: are subatomic particles evenly distributed throughout a piece of metal (and, indeed, all matter)? It’s easy enough to answer “no”, but it’s much more important to discover how the answer was found, even though that answer itself was pretty far from the truth.