I imagine that the moral intuitions in my brain come from a special black box within it, a “morality core” whose outputs I cannot easily change.
What does your “morality core” tell you when you ask it “How exactly should I rearrange the universe (or a significant part of it), if I had the power to rearrange it any way I want?”
Mine seems to say “I don’t know, but if you ever get the opportunity, be sure to do it right, or at least get what you really want.” So here I am, trying to figure out what that could mean. :)
Mine says the same thing. But I guess that's due to insufficient imagination, not insufficient understanding of (meta-)morality. A village girl coming to New York for the first time may have a vague preconceived idea of what kind of awesome dresses she's gonna buy, but before she buys one, she needs to do some actual shopping :-) It doesn't take a genius to imagine some pretty good outcomes for humanity. But finding the best one requires imagination.
Are you implying that your “morality core” can tell you which of two arbitrary scenarios is better, as long as they are both presented in sufficient detail (so as to not require imagination)? What about all of the ethical dilemmas we have been discussing over the past several years?
I think the village girl in New York example can actually be taken a step further. She doesn't just need to look at the dresses; the catalogs back in her village already show what they look like and what they cost. She also needs to see what people in New York actually wear, and how the dresses work for them on the street.
Just so, many people have presented ethical dilemmas that are not part of our everyday experience. If we have a useful morality core, then it (like most other senses or heuristics) is reliable only in the areas in which it's been trained. The village girl needs the street experience in NYC to make good purchases. So the two arbitrary scenarios would have to be similar enough to the intuiter's actual experiences to be accurately compared to one another.
I don’t claim that. “Arbitrary scenarios” is way too wide a class. It’s like asking a picture classifier to confidently detect tanks or their absence in arbitrary pictures, even very noisy and confusing ones. (Sorry for the analogy overload!) I only claim that, given the power to rearrange the universe, I would rearrange it into something I would confidently consider “pretty good”.