I think that is a dangerous anti-epistemic meme. (Especially since calling the process of determining our values “assigning value to things” is misleading, though I get what you mean.) You use the art of rationality to determine what you value, and you use the art of rationality to determine how you should reflect on or change the process of determining what you value. Instrumental and epistemic rationality do not decouple nearly so easily.
Probably nothing short of a good post focused on this single idea will change minds.
I didn’t even realize it was controversial.
Evolution created our core values; evolution is arational; therefore our core values are arational.
I don’t disagree with the conclusion but the reasoning does not follow.
If one can make mistakes in deciding what to value, what goals to set (in explicit reasoning or otherwise), then there is a place for pointing out that pursuing certain goals is an error (for such-and-such reasons), and a place for training oneself not to make such errors and to perceive the reasons that show why some goal is right or wrong.
Also, if the goals set by evolution should indeed be seen as arbitrary on reflection, you should ignore them. But some of them are not arbitrary (while others are).
As I’ve mentioned before, I hated the ‘arbitrary’ article, and most of the meta-ethics sequence. Value is arational, and nobody has provided a coherent argument to the contrary. You’re not discovering “rational value”; you’re discarding irrational instrumental values in a quest to achieve or discover arational terminal values.
Heh. And after looking up that link, I see it was you I was arguing with on this very same topic back then as well. Around and around we go...
This idea of “rational value” you think is incoherent is perhaps a straw-man. Let’s instead say that some people think that the methods you are using to discard instrumental values as irrational, or to find and endorse arational terminal values, might be generalized beyond what is obvious, might assume mistaken things, or might be an approximation of rules that are more explicitly justifiable.
For example, I think a lot of people use a simple line of reasoning like “okay, genetic evolution led me to like certain things, and memetic evolution led me to like other things, and maybe quirks of events that happened to me during development led me to like other things, and some of these intuitively seem more justified, or upon introspecting on them they feel more justified, or seem from the outside as if there would be more selection pressure for their existence so that probably means they’re the real values, …” and then basically stop thinking, or stop examining the intuitions they’re using to do that kind of thinking, or continue thinking but remain very confident in their thinking despite all of the known cognitive biases that make such thinking rather difficult.
Interestingly, very few people ponder the ontology of agency, or timeless control, or the complex relationship between disposition and justification, or spirituality and transpersonal psychology; and among the people who do ponder these things, it seems to me that very few stop and think “wait, maybe I am more confused about morality than I had thought”. It seems rather unlikely to me that this is because humans have reached diminishing marginal returns in the field of meta-ethics.
My “straw-man” does appear to have defenders, though we seem to agree you aren’t one of them. I’ve admitted great confusion regarding ethics, morality, and meta-ethics, and I agree that rationality is one of the most powerful tools we have to dissect and analyze them.
What other valid tools for dissecting and analyzing morality are there again?
I’m not facetiously nit-picking, just wondering about your answer if there is one.
Before rationality can be applied, there has to be something there to say ‘pick rationality’. Some other options might include intuition, astrology, life wisdom, or a random walk.
You required a very narrow subset of possibilities (“valid tools for dissecting and analyzing”), so I’m sure the above options aren’t included in what you would expect; it seems to me that you’ve got an answer already and are looking for a superset.
Thanks for your reply. The sentence “rationality is one of the most powerful tools we have to dissect and analyze [morality]” seemed to imply that you thought there were other “equally powerful” (powerful = reliably working) tools to arrive at true conclusions about morality.
As far as I’m concerned, rationality is the whole superset, so I was curious about your take on it. And yes, your above options are surely not included in what I would consider to be “powerful tools to arrive at true conclusions”. Ultimately I think we don’t actually disagree about anything; this is just another “but does it really make a sound” pitfall.
To some extent I am one such defender in the sense that I probably expect there to be a lot more of something like rationality to our values than you do. I was just saying that it’s not necessary for that to be the case. Either way the important thing is that values are in the territory where you can use rationality on them.
For reference, this point was discussed in this post:
The point at which I think rationality enters our values is when those values are self-modifying, at which point you must provide a function for updating. Perhaps we only differ on the percentage of our values we believe to be self-modifying.
Evolution created our rationality; evolution is arational; therefore our rationality is arational.
Genetic fallacy.
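In schematic form, the inference pattern both arguments share is something like the one below (a minimal rendering; the predicate names are just illustrative):

$$
\frac{\mathrm{Origin}(X, Y) \qquad P(X)}{P(Y)}
$$

Substituting X = evolution, Y = our core values (or our rationality), and P = “is arational” gives exactly the two arguments above. The schema is invalid in general, since a property of a cause need not transfer to its effects; the conclusion may still be true, but not for this reason.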
Yeah, I should really stop linking to anything written by Eliezer. Putting it in my own words invariably leads to much better communication, and everyone is quite content to tear it apart should I misunderstand the slightest nuance of “established material”.
What does the link have to do with it? There just isn’t any way to get from the two premises to the conclusion.
The link gave me a reason to think I had explained myself, when I obviously hadn’t included enough material to form a coherent comment. I know that what I’m thinking feels correct, and people do seem to agree with the core result, but I do not have the words just now to explain my thinking to you and correct it.
Why do you use the phrase “art of rationality”, as opposed to say, “philosophy”? Can you suggest a process for determining what you value, and show how it is related to things that are more typically associated with the word “rationality”, such as Bayesian updating and expected utility? Or is “art of rationality” meant to pretty much cover all of philosophy or at least “good” philosophy?
Primarily training of intuition to avoid known failure modes: implicit influence on the process of arriving at judgments, as compared to explicit procedures for pre- or post-processing interactions with it.
I haven’t found any system of thought besides LW-style rationality that would be sufficient to even start thinking about your values, and even LW-style rationality isn’t enough. More concretely, very few people know about the illusion of introspection, evolutionary psychology, verbal overshadowing, the ‘thermodynamics of cognition’, revealed preference (and how ‘revealed’ doesn’t mean ‘actual’), cognitive biases, and in general the fundamental truth that you can’t believe everything you think. And more importantly, few have the practical and ingrained knowledge that things like those are always sitting there waiting to trip you up if you don’t unpack your intuitions and think carefully about them. Of course I can’t suggest a process for determining what you value (or what you ‘should’ value), since that’s essentially the problem of the human condition, but I know that each of the things I listed would most likely have to be accounted for in such a process.
Hm… the way you say it makes me want to say “no, that would be silly and arrogant, of course I don’t think that”, but ya know I spent a fair amount of time using ‘philosophy’ before I came across Less Wrong, and it turns out philosophy, unlike rationality, just isn’t useful for answering the questions I care about. So, yeah, I’ll bite that bullet. The “art of rationality” covers “good” philosophy, since most philosophy sucks and what doesn’t suck has been absorbed. But that isn’t to say that LW-style philosophy hasn’t added a huge amount of content that makes the other stuff look weak by comparison.
(I should say, it’s not like something like LW-style rationality didn’t exist before; you, for instance, managed to find and make progress on interesting and important questions long before there were ‘sequences’. I’m not saying LW invented thinking. It’s just that the magic that people utilized to do better than traditional rationality was never really put down in a single place, as far as I know.)
I don’t disagree with what you write here, but I think if you say something like “You use the art of rationality to determine what you value” you’ll raise the expectation that there is already an art of rationality that can be used to determine what someone values, and then people will be disappointed when they look closer and find out that’s not the case.
Ah, I see your point. So the less misleading thing to say might be something roughly like: “We don’t yet know how to find or reason about our values, but we have notions of where we might start, and we can expect that whatever methods do end up making headway are going to have to be non-stupid in at least as many ways as our existing methods of solving hard problems are non-stupid.”
You can’t go all the way down the turtles, firmly setting each one on the one below. Adopting axioms is always a somewhat arbitrary thing. There are no axioms deeper than the deepest ones; they are always put in place quite arationally, just as they are.
A process of discovery which uncovers things which came from where?
A process of change which determines the direction things should go based on what?
You misunderstand; I’m not saying your values necessarily have to be ‘rational’ in some deep sense. I’m just saying that in order to figure out what your values might be, how they are related to each other, and what that means, you have to use something like rationality. I would also posit that in order to figure out what rocks are, how they are related to each other, and what that means, you have to use something like rationality. That obviously doesn’t mean that values or rocks are ‘rational’, but it might mean you can notice interesting things about them you wouldn’t have otherwise.
I agree with this statement.
I’m sorry to have continued the argument when I was apparently unclear.