Sheridan: “What do you want?”
Kosh: “Never ask that question!”
People are like dogs: they just sort of do things arbitrarily. If you look beyond the smoke and mirrors of your surface preferences, all you’re going to find behind them is more smoke and mirrors. A wise man once suggested to me that I should just treat my brain as an oracle for preferences—give it as good data as I can, and as much processing power as it needs, and just take what it spits out as gospel, rather than seeking the underlying principles.
That sounds deeply wise, but I think we can do better.
But don’t you want to understand the underlying principles?
Not necessarily: your brain might have this annoying property that understanding a moral principle changes your brain in such a way that it no longer cares about that principle.
This seems relatively unlikely. I have yet to meet a person of whom I would predict that, once they had a firm understanding of morality, they would not have a moral aversion to, say, the possibility of raping and killing their entire family. I would also postulate that, of those people who would so easily stop caring about such moral considerations when granted knowledge, few would have a significant aversion to something as instinctively morally minor as gaining understanding.
Mind you, the fact that people centered a whole morality myth around an original sin of seeking the knowledge of good and evil suggests that some people are scared of that kind of understanding!
Can you unpack your grounds for trusting your predictions about other people’s reflectively consistent moral intuitions, whether about this in particular or more generally?
As I think about it myself, I conclude that mostly what I have is a desire to believe that, as believing it makes me far less inclined to hide under the table at Thanksgiving. But I’ve certainly had the experience of having moral beliefs (or at least, things that I would have described as moral beliefs) dissolve in the face of greater understanding, so my thinking about it is muddled enough that my predictions have wide error bars.
Certainly, I would agree that someone with sufficient understanding of the world will generally choose not to rape and kill their entire family when there’s no net benefit to be gained by doing so, but that’s not quite the same thing.
That isn’t a position I have or have presented, so it would not make sense for me to try to unpack the null. In particular, I have a huge amount of doubt regarding reflectively consistent intuitions—even my own. (I.e., this is a minor and non-malicious “When will you stop beating your wife?” problem.)
My prediction was about the counterfactual experience of aversive moral feelings, in the extreme example provided, among those people that I know. I expect them to be squicked out by the possibility of raping and killing their entire family even if they have an understanding of the moral principles.
(There may be an exception or two among people I suspect have high-functioning sociopathic tendencies, but these come in under the second observation regarding them not caring anyway.)
Ah, I see.
Sorry, I was understanding “firm understanding of morality” in a more from-the-ground-up, nonhuman sense than I think you meant it… I think I’ve been overtrained on FAI conversations.
Sure, an understanding of morality in a normal, real-world sense doesn’t preclude—nor even noticeably inhibit—moral aversions at the level of “don’t rape and kill my family”; absolutely agreed.
Well played.