Yes, that’s what most Quinean naturalists are doing...
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism? I hope so. I’m also still curious to hear your response to the specific example I’ve now given several times of how even non-naturalistic philosophy can provide useful insights that bear directly on your work on Friendly AI (the “extrapolation” bit).
As for expecting naturalistic philosophy to teach very bad habits of thought: That has some plausibility. But it is hard to argue about with any precision. What’s the cost/benefit analysis on reading naturalistic philosophy after having undergone significant LW-rationality training? I don’t know.
But I will point out that reading naturalistic philosophy (1) deconverted me from fundamentalist Christianity, (2) led me to reject most of standard analytic philosophy, (3) led me to almost all of the “standard” (in the sense I intended above) LW positions, and (4) got me reading and loving Epistemology and the Psychology of Human Judgment and Good and Real (two philosophy books that could just as well be a series of Less Wrong blog posts) - all before I started regularly reading Less Wrong.
So… it’s not always bad. :)
Also, your recommendation not to read naturalistic, reductionistic philosophy outside of Less Wrong feels very paternalistic and cultish to me, and I have a negative emotional (and perhaps rational) reaction to the suggestion that people should get their philosophy from only a single community.
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism?
Reply to charge that it is clearly false: Sorry, it doesn’t look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.
Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don’t link to a book on Amazon where I click “Surprise me” ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.
The CEV article’s “conflation” is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I’d seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) because the original CEV paper wasn’t supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.
Thanks for your reply.

On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.
Your response on misrepresenting Quinean naturalism did not reply to this part: “Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it.”
As for an example of dissolving certain questions into cognitive algorithms, I’m drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I’m writing.)
On CEV and extrapolation: You seem to agree that the distinction is useful, because you’ve used it yourself elsewhere (you just weren’t going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that’s not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.
I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.
I agree about philosophy, and I actually feel similarly about LW-style rationality, for my value of real work (mostly engineering, with some art and science). Your tricks burden the tree search, and they easily lead to the wrong order of branch processing, because the 'biases' that make branch processing effective are disabled or, worst of all, negated before a substitute is devised.

If you want to form a belief about, for example, FAI, it's all well and good that you don't feel morality can result from some simple principles. But if you want to build FAI, this branch (a generated morality we would agree with) sits much lower in the search tree, while its probability of success really isn't that much worse, since the long, hand-wavy alternative argument has many possible points of failure and low reliability. And there is still no immunity against fallacies. The worst form of the sunk cost fallacy is disregarding the possibility of a better solution after the cost has been sunk. That is what destroys corporations after they sink costs: they won't even pursue a cost-recovery option when it doesn't coincide with their prior effort and only utilizes part of it.
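The tree-search metaphor can be made concrete. In a best-first search, a heuristic (the 'bias') decides which branch to expand next; disabling it does not change what is reachable, only how much work it takes to get there. A minimal sketch, where the toy graph, the heuristic, and all node names are invented purely for illustration:

```python
import heapq

def best_first(graph, start, goal, heuristic):
    """Greedy best-first search. Returns (path, nodes_expanded).
    `heuristic` orders the frontier; with h = 0 everywhere, the
    'bias' is disabled and branches are explored without preference."""
    frontier = [(heuristic(start), [start])]
    seen = set()
    expanded = 0
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node in seen:
            continue
        seen.add(node)
        expanded += 1
        if node == goal:
            return path, expanded
        for nxt in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (heuristic(nxt), path + [nxt]))
    return None, expanded

# Toy graph: two dead-end branches plus one productive branch to goal G.
graph = {
    "S": ["A", "B", "C"],
    "A": ["A1", "A2"],
    "B": ["B1", "B2"],
    "C": ["G"],
}

# 'Biased' heuristic: prefers the C-side branch (lower value = expanded first).
h_biased = lambda n: 0 if n in ("C", "G") else 1
h_none = lambda n: 0  # bias disabled: no preference among branches

path1, cost1 = best_first(graph, "S", "G", h_biased)
path2, cost2 = best_first(graph, "S", "G", h_none)
# Both find the same path; the heuristic-free search expands more nodes.
```

Both searches reach the goal, but removing the heuristic burdens the search with extra expansions of the dead-end branches, which is the cost the comment is pointing at.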
Because I expect it to teach very bad habits of thought that will lead people to be unable to do real work. Assume naturalism! Move on! NEXT!