Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don’t care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
But it needn’t be! See for example f(x) = exp(−1/x) for x > 0, and f(x) = 0 for x ≤ 0.
Wikipedia has an analysis.
(Of course, the space of objects isn’t exactly isomorphic to the real line, but it’s still a neat example.)
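To spell the example out, here it is written as LaTeX for concreteness; the smoothness claim is just the standard textbook fact about this particular function.

```latex
% The function mentioned above: identically 0 for x <= 0, strictly positive
% for x > 0, yet infinitely differentiable everywhere, including at x = 0.
\[
  f(x) =
  \begin{cases}
    e^{-1/x}, & x > 0, \\
    0,        & x \le 0.
  \end{cases}
\]
% For x > 0 each derivative is (a polynomial in 1/x) times e^{-1/x}, and
% e^{-1/x} decays faster than any power of 1/x grows as x -> 0^+, so
\[
  f^{(n)}(0) = 0 \quad \text{for every } n \ge 0.
\]
% Hence the value, the derivative, the second derivative, etc. are all
% continuous at the point where f first becomes nonzero: a "0 to tiny"
% transition with no discontinuity in any derivative.
```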
Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.
I dispute that; the paperclip is almost certainly either more or less likely to become a Boltzmann brain than an equivalent volume of vacuum.
And … it isn’t clear that there are some configurations you care for … a bit? Sparrows being tortured and so on? You don’t care more about dogs than insects, and more about chimpanzees than dogs?
(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven’t gone dreadfully awry in my introspection …)
This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.
Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?
No, but I strongly suspect that all Earthly life without a frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions: I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat (because optimizing my diet is more important and my civilization lets me get away with it), I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no (though there are non-sentience-related reasons for me to care in that case), and pigs I am genuinely unsure of.
To be clear, I am unsure whether pigs are objects of value, which incorporates empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness, to the extent I can imagine that coherently. I can imagine that there is a sharp line of sentience which humans are over and pigs are under, and that my idealized caring would drop immediately to zero for anything under the line, but my subjective probability of both of these being simultaneously true is under 50%, though they are not independent.
However, it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye… or not.
Does it matter to you that octopuses are quite commonly cannibalistic?
No. Babyeater lives are still important.
Funny, I parsed that as “should we then maybe be capturing them all to stop them eating each other?”
Didn’t even occur to me that was an argument about extrapolated octopus values.
It wasn’t, your first parse would be a correct moral implication. The Babyeaters must be stopped from eating themselves.
… whoops.
I meant that I parsed fubarobfusco’s comment differently from you (“they want to be cannibals, therefore it’s … OK to eat them? Somehow?”), because I just assumed that obviously you should save the poor octopi (i.e. it would “bother” you in the sense of moral anguish, not “betcha didn’t think of this!”).
I was unable to empathize with this view when reading 3WC. To me, the Prime Directive approach makes more sense. I was willing to accept that the Superhappies have an anti-suffering moral imperative, since they are aliens with their alien morals, but the idea that all the humans on the IPW, or even its bridge officers, would be unanimous in their resolute desire to end the suffering of the Babyeater children strained my suspension of disbelief more than the fact that no one, accidentally or intentionally, made an accurate measurement of the star drive constant.
As an example outside of sci-fi, if you see an abusive husband and a brainwashed battered wife, the Prime Directive tells you to ignore the whole situation, because they both think it’s more or less okay that way. Would you accept this consequence?
Would it make a moral difference if the husband and wife were members of a different culture; if they were humans living on a different planet; or if they belonged to a different sapient species?
The idea behind the PD is that, for foreign enough cultures:
- you can’t predict the consequences of your intervention with reasonable certainty,
- you can’t trust your moral instincts to guide you to do the “right” thing, and
- the space of all favorable outcomes is likely much smaller than that of all possible outcomes, as in the literal genie case,
so you end up acting like a UFAI more likely than not.
Hence non-intervention has a higher expected utility than an intervention based on your personal deontology or virtue ethics (a toy sketch of this comparison follows below). This is not true for sufficiently well-analyzed cases, like abuse in your own society. The farther you stray from known territory, the greater the chance that your intervention will be a net negative. Human history is rife with examples of this.
So, unless you can do a full consequentialist analysis of applying your morals to an alien culture, keep the hell out.
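A toy version of that expected-utility comparison, with all symbols hypothetical (they are mine, not the commenter’s): take non-intervention as the zero point, let p be the chance that a blind intervention in an alien culture lands in the favorable region, G the gain if it does, and L the loss if it does not.

```latex
% Illustrative toy model only; p, G, L are hypothetical symbols, not from the
% original comment. Non-intervention is normalized to expected utility 0.
\[
  \mathbb{E}[U_{\text{intervene}}] = p\,G - (1 - p)\,L,
  \qquad
  \mathbb{E}[U_{\text{abstain}}] = 0.
\]
% If the favorable region is a small slice of outcome space and your moral
% instincts cannot reliably steer you into it, p is small, and intervention
% loses in expectation:
\[
  \mathbb{E}[U_{\text{intervene}}] < \mathbb{E}[U_{\text{abstain}}]
  \quad\Longleftrightarrow\quad
  p < \frac{L}{G + L}.
\]
% A "full consequentialist analysis" amounts to an argument that p is large
% enough (or the stakes asymmetric enough) to flip this inequality.
```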
Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if a lot of humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better.
(this is not a rhetorical question)
Yes. If pigs were objects of value, it would be morally wrong to eat them, and indeed the moral thing to do would be to not create them.
This needs a distinction between the value of creating pigs, the value of existing (living) pigs, and the value of killing pigs. If existing pigs are objects of value, but the negative value of killing them (of the event itself, not of the change in value between a living pig and a dead one) doesn’t outweigh the value of their preceding existence, then creating and killing as many pigs as possible has positive value (relative to noise; once opportunity cost is counted the value is probably negative, since there are better things to do with the same resources; by the same token, post-FAI the value of “classical” human lives is also negative, as it’ll be possible to make significant improvements).
I don’t think it’s morally wrong to eat people if they happen to be in irrecoverable states.