Probably nothing short of a good post focused on this single idea will change minds.
I didn’t even realize it was controversial.
Evolution created our core values > evolution is arational > our core values are arational.
I don’t disagree with the conclusion but the reasoning does not follow.
If one can make mistakes in deciding what to value and what goals to set (in explicit reasoning or otherwise), then there is a place for pointing out that pursuing certain goals is an error (for such-and-such reasons), and a place for training ourselves not to make such errors and to perceive the reasons why some goal is right or wrong.
Also, if the goals set by evolution should indeed be seen as arbitrary on reflection, you should ignore them. But some of them are not arbitrary on reflection, while others are.
As I’ve mentioned before, I hated the ‘arbitrary’ article, and most of the meta-ethics sequence. Value is arational, and nobody’s provided a coherent defense otherwise. You’re not discovering “rational value”, you’re discarding irrational instrumental values in a quest to achieve or discover arational terminal values.
Heh. And after looking up that link, I see it was you I was arguing with on this very same topic back then as well. Around and around we go...
This idea of “rational value” you think is incoherent is perhaps a straw-man. Let’s instead say that some people think that the methods you are using to discard instrumental values as irrational, or to find and endorse arational terminal values, might generalize beyond what is obvious, might rest on mistaken assumptions, or might be an approximation of rules that are more explicitly justifiable.
For example, I think a lot of people use a simple line of reasoning like “okay, genetic evolution led me to like certain things, memetic evolution led me to like other things, and maybe quirks of events during my development led me to like still other things; some of these intuitively seem more justified, or feel more justified upon introspection, or look from the outside as if there would be more selection pressure for their existence, so those are probably the real values, …” and then basically stop thinking, or stop examining the intuitions they are using to do that kind of thinking, or continue thinking but remain very confident in their conclusions despite all of the known cognitive biases that make such thinking difficult.
Interestingly, very few people ponder the ontology of agency, timeless control, the complex relationship between disposition and justification, or spirituality and transpersonal psychology; and among the people who do ponder these things, it seems to me that very few stop and think, “Wait, maybe I am more confused about morality than I had thought.” It seems rather unlikely to me that this is because humans have reached diminishing marginal returns in the field of meta-ethics.
My “straw-man” does appear to have defenders, though we seem to agree you aren’t one of them. I’ve admitted great confusion regarding ethics, morality, and meta-ethics, and I agree that rationality is one of the most powerful tools we have to dissect and analyze it.
What other valid tools for dissecting and analyzing morality are there again?
I’m not facetiously nit-picking, just wondering about your answer if there is one.
Before rationality can be applied, there has to be something there to say ‘pick rationality’. Other options might include intuition, astrology, life wisdom, or a random walk.
You required a very narrow subset of possibilities (“valid tools for analyzing and dissecting”), so I’m sure the above options aren’t included in what you would expect; it seems to me that you’ve got an answer already and are looking for a superset.
Thanks for your reply. The sentence “rationality is one of the most powerful tools we have to dissect and analyze [morality]” seemed to imply that you thought there were other equally powerful (powerful = reliably working) tools for arriving at true conclusions about morality.
As far as I’m concerned rationality is the whole superset, so I was curious about your take on it. And yes, your above options are surely not included in what I would consider to be “powerful tools to arrive at true conclusions”. Ultimately I think we don’t actually disagree about anything; it’s just another “but does it really make a sound” pitfall.
To some extent I am one such defender, in the sense that I probably expect there to be a lot more of something like rationality in our values than you do. I was just saying that this doesn’t have to be the case. Either way, the important thing is that values are in the territory, where you can use rationality on them.
For reference, this point was discussed in this post:
I think rationality enters our values at the point where those values are self-modifying, since you must then provide a function for updating them. Perhaps we only differ on the percentage of our values we believe to be self-modifying.
Evolution created our rationality > evolution is arational > our rationality is arational.
Genetic fallacy.
Yeah, I should really stop linking to anything written by Eliezer. Putting it in my own words invariably leads to much better communication, and everyone is quite content to tear it apart should I misunderstand the slightest nuance of “established material”.
What does the link have to do with it? There just isn’t any way to get from the two premises to the conclusion.
The link gave me a reason to think I had explained myself, when I obviously hadn’t included enough material to form a coherent comment. What I’m thinking feels correct, and people do seem to agree with the core result, but just now I don’t have the words to explain my thinking to you so that it can be corrected.