It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.
Given that your audience at least in some sense disagrees, you’d do well to use a more powerful argument than “it would be odd” (it would be a fine argument if you expected the audience’s intuitions to align with the statement, but it’s apparently not the case), especially given that your position suggests how to construct one: find an insight generated by mainstream philosophy that would be considered new and useful on LW (which would be most effective if presented/summarized in LW language), and describe the process that allowed you to find it in the literature.
On a separate note, I think finding a place for LW rationality within academic philosophy might be a good thing, but this step should be distinguished from the connotation it brings, namely that the academic philosophy located nearby under this placement is itself useful.
So, I agree denotationally with your post (along the lines of what you listed in this comment), but still disagree connotationally with the implication that standard philosophy is of much use (pending arguments that convince me otherwise; the disagreement itself is not that strong). I disagree strongly with the way this connotation argues its case through the post, without presenting arguments that, under its own assumptions, should be available. I understand that you were probably unaware of this interpretation of your post (i.e. as arguing for mainstream philosophy being useful, as opposed to laying out some groundwork in preparation for such an argument), or consider it incorrect, but I would argue that you should’ve anticipated it and taken it into account.
(I expect that if you add a note at the beginning of the post to the effect that the point of this particular post is to locate LW philosophy in mainstream philosophy, perhaps to point out priority for some of the ideas, and edit the rest with that in mind, the connotational impact would somewhat dissipate, without changing the actual message. But given the discussion that has already taken place, it might not be worth doing.)
No, I didn’t take the time to make an argument.
But I am curious to discuss this with someone who doesn’t find it odd that mainstream philosophy could make useful contributions up until a certain point and then suddenly stop. That’s far from impossible, but I’d be curious to know what you think caused the stop in useful progress. And when did that supposedly happen? In the 1960s, after philosophy’s predicate logic and Tarskian truth-conditional theories of language were mature? In the 1980s? Around 2000?
The inability of philosophers to settle on a position on an issue and move on. It’s very difficult to make progress (i.e., additional useful contributions) if your job depends not on moving forward and generating new insights, but rather on going back and forth over old arguments. People like, e.g., Yudkowsky, whose job allows/requires him to devote almost all of his time to new research, would be much more productive; possibly, depending on the philosopher and non-philosopher in question, so much more productive that going back over philosophical arguments and positions isn’t very useful.
The time would depend on the field in question, of course; I’m no expert, but from an outsider’s perspective I feel like, e.g. linguistics and logic have had much more progress in recent decades than, e.g. philosophical consciousness studies or epistemology. (Again, no expert.) However, again, my view is less that useful philosophical contributions have stopped, and more that they’ve slowed to a crawl.
This is indeed why most philosophy is useless. But I’ve asserted for a long time that most philosophy is useless. This wouldn’t explain why philosophy would nevertheless make useful progress up until the 60s or 80s or 2000s and then suddenly stop. That suggestion remains to be explained.
(My apologies; I didn’t fully understand what you were asking for.)
First, it doesn’t claim that philosophy makes zero progress, just that science/AI research/etc. make more. There were still broad swathes of knowledge (e.g. linguistics and psychology) that split off relatively late from philosophy, and in which philosophers were still making significant progress right up to the point where they became sciences.
Second, philosophy has either been motivated by or has freeridden off of science and math (e.g., to use your example, Frege’s development of predicate logic was motivated by his desire to place math on a more secure foundation). But the main examples (that are generally cited elsewhere, at least) of modern integration or intercourse between philosophy and science/math/AI (e.g. Dennett, Drescher, Pearl, etc.) have already been considered, so it’s reasonable to say that mainstream philosophy probably doesn’t have very much more to offer, let alone a “centralized repository of reductionist-grade naturalistic cognitive philosophy” of the sort Yudkowsky et al. are looking for.
Third, the low-hanging fruit would have been taken first; because philosophy doesn’t settle points and move on to entirely new search spaces, it would get increasingly difficult to find new, unexplored ideas. While philosophers could technically have moved on to explore new ideas anyway, doing so is more difficult than sticking to established debates, feels awkward, and often leads people to start studying things not considered part of philosophy (e.g. Noam Chomsky or, to an extent, Alonzo Church). Therefore, innovation/research would slow down as time went on. (And where philosophers have been willing to go out ahead and do completely original thinking, even where they’re not very influenced by science, LW has seemed to integrate their thinking; e.g. Parfit.)
(Btw, I don’t think anybody is claiming that all progress in philosophy has stopped; indeed, I explicitly stated that I thought it hadn’t. I’ve already given four examples above of philosophers doing innovative work useful for LW.)
Yeah, I’m not sure we disagree on much. As you say, Less Wrong has already made use of some of the best of mainstream philosophy, though I think there’s still more to be gleaned.
That’s far from impossible, but I’d be curious to know what you think was cause the stop in useful progress. And when did that supposedly happen?
Just now. As of today, I don’t expect to find already-written useful stuff in mainstream philosophy that I don’t already know, commensurate with the effort necessary to dig it up (this situation could be improved by reducing the necessary effort, if there is indeed something in there to find). The marginal value of learning more existing math or cognitive science or machine learning for answering the same (philosophical) questions is greater. But future philosophy will undoubtedly bring new good insights, in time, absent defeaters.
So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it’s worth? If so, I find that plausible. Like I said, I don’t think Eliezer should spend his time digging through mainstream philosophy. Digging through math books and AI books will be much more rewarding. I don’t know what your fields of expertise are, but I suspect digging through mainstream philosophy would not be the best use of your time, either.
So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it’s worth?
I don’t believe that, for the purposes of developing human rationality or FAI theory, this should be on anyone’s worth-doing list for some time yet, not until we can afford this kind of specialization to go after low-probability perks.
I expect that there is no existing work coming from philosophy useful-in-itself to an extent similar to Drescher’s Good and Real (and Drescher is/was an AI researcher), although it’s possible and it would be easy to make such work known to the community once it’s discovered. People on the lookout for these things could be useful.
I expect that reading a lot of related philosophy with a prepared mind (so that you don’t catch an anti-epistemic cold or death) would refine one’s understanding of many philosophical questions, but mostly not in the form of modular communicable insights, and not to a great degree (compared to the background training from spending the same time studying math/AI, that is, the ways of thinking you learn apart from the subject matter). This limits the extent to which people specializing in studying potentially relevant philosophy can contribute.
Do you still think this was after reading my for starters list of mainstream philosophy contributions useful to Less Wrong? (below)
The low-hanging fruit is already gathered. That list (outside of the AI/decision theory references) looks useful for discussing questions of priority and for gathering real-world data (where it refers to psychological experiments). Bostrom’s group’s and Drescher’s and Pearl’s work we already know; pointing these out is not a clear example of the potential fruits of the quest for scholarship in philosophy (confusingly enough, but keep in mind the low-hanging fruit part, and that the means for finding these were unrelated to scholarship in philosophy; also, being on the lookout for self-contained significant useful stuff is the kind of activity I was more optimistic about in my comment).
I don’t get it. When low-hanging fruit is covered on Less Wrong, it’s considered useful stuff. When low-hanging fruit comes from mainstream philosophy, it supposedly doesn’t help show that mainstream philosophy is useful. If that’s what’s going on, it’s a double standard, and a desperate attempt to “show” that mainstream philosophy isn’t useful.
Also, saying “Well, we already know about lots of mainstream philosophy that’s useful” is direct support for the central claim of my original post: That mainstream philosophy can be useful and shouldn’t be ignored.
Most of the stuff already written on Less Wrong is not useful to the present me in the same sense as philosophy isn’t, because I already learned what I expected to be the useful bits. I won’t be going on a quest for scholarship in Less Wrong either. And if I need to prepare an apprentice, I would give them some LW sequences and Good and Real first (on the philosophy side), and looking through mainstream philosophy won’t come up for a long time.
These two use cases are the ones that matter to me; what use case did you have in mind? Just intuitive “usefulness” is too unclear.
I agree that mainstream philosophy is far from the first or most important thing one can study.
The use case I’m particularly focused on is machine ethics for self-modifying superintelligence. That draws on a huge host of issues discussed at length in the mainstream literature, including much of the material I listed below, and also stuff I haven’t mentioned yet on the problems with reflective equilibrium (which CEV uses), consequentialism, and so on.
The use case I’m particularly focused on is machine ethics for self-modifying superintelligence.
Well, I don’t share your expectation of learning useful stuff (on the philosophy side) about that which you won’t find in AI textbooks, the metaethics sequence, FHI papers, Good and Real, and other sources already located.
But some of the sources you just listed are from mainstream philosophy...
Again, the location of those sources was not (and, were it otherwise, could well not be) a product of scholarship in mainstream philosophy, which subtracts from the expected usefulness of the activity of reading new, unknown stuff, an altogether different enterprise from reading the stuff that’s already known to be useful.
Also, I’m working on some stuff with regard to machine ethics for superintelligence, so I’ll be curious to find out if you find that useful as well.
Do you mean, would I find your survey papers/book useful?
Probably not for me, but maybe useful as material for new people to study, since it’s focused on this particular problem and so could collect the best relevant things you’ll find, depending on your standard of quality/relevance in selecting things to discuss. From what I saw of your first drafts and other articles, it’ll probably look more like a broad eclectic survey than useful-for-study lecture notes, which subtracts from that use case (but who knows).
Could catalyze conversation in academia or elsewhere, though, or work as a standard reference node for when you’re in a hurry and don’t want to dereference it.
(Compare with Chalmers’ paper, which is all fine in its general outline, generates a citation node, allows one to introduce people from a particular background to the motivation for AGI-risk-related discussion, and has already initiated discussion in academia. But it’s not useful as study material, given the available alternatives, nor does it say anything new.)
Again, location of those sources was not… a product of scholarship in mainstream philosophy...
I think we agree on this, so I’ll drop it. My original post claimed that mainstream philosophy makes useful contributions and should not be ignored, and you agree. We also agree that poring through the resources of mainstream philosophy is not the best use of pretty much anyone’s time.
As for my forthcoming work on machine ethics for superintelligence...
maybe useful as material for new people to study
Yep. I want to write short, broad, well-cited overviews of the subjects relevant to Friendly AI, something that mostly has not yet been done.
Could catalyze conversation in academia or elsewhere
Yes.
[could] work as standard reference node for when you’re in a hurry
Right.
You’ve hit on most of the immediate goals of such work, though eventually my intention is to contribute to more of the cutting-edge stuff on Friendly AI, for example on how reflective equilibrium could be programmatically implemented in CEV. But that’s getting ahead of myself. Also, it’s doubtful that such work will actually materialize, because of the whole ‘not being independently wealthy’ problem I have. Research takes time, and I’ve got rent to pay.
What’s the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.