Mine says the same thing. But I guess it’s due to insufficient imagination, not insufficient understanding of (meta-)morality. A village girl coming to New York for the first time may have a vague preconceived idea of what kind of awesome dresses she’s gonna buy, but before she buys one, she needs to do some actual shopping :-) It doesn’t take a genius to imagine some pretty good outcomes for humanity. But finding the best one requires imagination.
Are you implying that your “morality core” can tell you which of two arbitrary scenarios is better, as long as both are presented in sufficient detail (so as not to require imagination)? What about all of the ethical dilemmas we have been discussing over the past several years?
I think the village-girl-in-New-York example can actually be taken a step further. She doesn’t just need to look at the dresses; the catalogs back in her village already show what they look like and what they cost. She also needs to see how people in New York actually dress, and how the dresses work for them on the street.
Just so, many people have presented ethical dilemmas that are not part of our everyday experience. If we have a useful morality core, then (like most other senses or heuristics) it’s useful only in the areas in which it’s been trained. The village girl needs street experience in NYC to make good purchases. So the two arbitrary scenarios would have to be similar enough to the intuiter’s actual experience to be accurately compared to one another.
I don’t claim that. “Arbitrary scenarios” is way too wide a class. It’s like asking a picture classifier to confidently detect tanks or their absence in arbitrary pictures, even very noisy and confusing ones. (Sorry for the analogy overload!) I only claim that, given the power to rearrange the universe, I would rearrange it into something I would confidently consider “pretty good”.