I meant that I parsed fubarobfusco’s comment (“they want to be cannibals, therefore it’s … OK to eat them? Somehow?”) differently than you did, because I just assumed that obviously you should save the poor octopi (i.e. it would “bother” you in the sense of moral anguish, not “betcha didn’t think of this!”).
I was unable to empathize with this view when reading 3WC. To me the Prime Directive approach makes more sense. I was willing to accept that the Superhappies have an anti-suffering moral imperative, since they are aliens with their own alien morals, but the idea that all the humans on the IPW, or even just its bridge officers, would be unanimous in their resolute desire to end the suffering of the Babyeater children strained my suspension of disbelief more than no one accidentally or intentionally making an accurate measurement of the star drive constant did.
As an example outside of sci-fi, if you see an abusive husband and a brainwashed battered wife, the Prime Directive tells you to ignore the whole situation, because they both think it’s more or less okay that way. Would you accept this consequence?
Would it make a moral difference if the husband and wife were members of a different culture; if they were humans living on a different planet; or if they belonged to a different sapient species?
The idea behind the PD is that, for foreign enough cultures:
you can’t predict the consequences of your intervention with reasonable certainty;
you can’t trust your moral instincts to guide you to do the “right” thing;
the space of favorable outcomes is likely much smaller than the space of all possible outcomes, as in the literal-genie case;
so you end up acting like a UFAI more likely than not.
Hence non-intervention has a higher expected utility than an intervention based on your personal deontology or virtue ethics. This is not true for sufficiently well-analyzed cases, like abuse in your own society. The farther you stray from known territory, the greater the chance that your intervention will be a net negative. Human history is rife with examples of this.
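To make the expected-utility claim concrete, here is a toy sketch in Python. The probabilities and utilities are invented purely for illustration (my assumptions, not anything from the story or the argument above): intervention only beats doing nothing when your model of the culture is good enough that favorable outcomes aren't a vanishing slice of the outcome space.

```python
# Toy expected-utility sketch of the non-intervention argument above.
# All numbers are made up for illustration; nothing here comes from 3WC.

def expected_utility(p_good: float, u_good: float, u_bad: float) -> float:
    """Expected utility of intervening, given the chance it goes well."""
    return p_good * u_good + (1 - p_good) * u_bad

U_STATUS_QUO = 0.0   # baseline: do nothing, the suffering continues
U_GOOD = 10.0        # intervention works as intended
U_BAD = -20.0        # intervention backfires (literal-genie misfire)

# Well-analyzed case: e.g. abuse in your own society.
p_familiar = 0.8
# Alien case: favorable outcomes are a small slice of the outcome space.
p_alien = 0.1

print(expected_utility(p_familiar, U_GOOD, U_BAD))  #  4.0 > 0 -> intervene
print(expected_utility(p_alien, U_GOOD, U_BAD))     # -17.0 < 0 -> keep out
```

With these made-up numbers the familiar case clears the status quo and the alien case loses badly, which is the whole force of the PD argument: the sign of the comparison flips as your uncertainty about the culture grows.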
So, unless you can do a full consequentialist analysis of applying your morals to an alien culture, keep the hell out.
No. Babyeater lives are still important.
Funny, I parsed that as “should we then maybe be capturing them all to stop them eating each other?”
Didn’t even occur to me that was an argument about extrapolated octopus values.
It wasn’t; your first parse would be a correct moral implication. The Babyeaters must be stopped from eating their children.
… whoops.