I thought we were listing anything at least as plausible as the evil giant hypothesis. I have no information as to the morality distribution of giants in general, so I use maximum entropy and assign ‘evil giant’ and ‘good giant’ equal probability.
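(A minimal sketch of the maximum-entropy step, under my own simplifying assumption that we restrict attention to just the two hypotheses ‘evil giant’ and ‘good giant’: with no constraints beyond normalization, entropy is maximized by the uniform distribution.)

\[
  H(p) = -p \ln p - (1 - p)\ln(1 - p), \qquad
  \frac{dH}{dp} = \ln\frac{1 - p}{p} = 0 \;\Rightarrow\; p^{*} = \tfrac{1}{2},
\]

so each hypothesis gets probability 1/2.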
Given complexity of value, ‘evil giant’ and ‘good giant’ should not be weighted equally; if we have no specific information about the morality distribution of giants, then as with any optimization process, ‘good’ is a much, much smaller target than ‘evil’ (if we’re including apparently-human-hostile indifference).
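(One hedged way to formalize the ‘smaller target’ claim, where the 2^{-K(g)} weighting over goal systems g is my own illustrative choice of complexity prior rather than anything established about giants: goal systems that track the full complexity of human value form a small, comparatively complex subset, so they collectively receive little prior mass.)

\[
  P(\text{good}) \;\approx\; \sum_{g \in G_{\text{good}}} 2^{-K(g)}
  \;\ll\;
  \sum_{g \notin G_{\text{good}}} 2^{-K(g)} \;\approx\; P(\text{evil or indifferent}).
\]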
Unless we believe them to be evolutionarily close to humans, or to have evolved under selection pressures similar to those that produced human morality, etc., in which case we can do a bit better than a complexity prior for their moral motivations.
(For more on this, check out my new blog, Overcoming Giants.)
Well, if by giants we mean “things that seem to resemble humans, only particularly big”, then we should expect some sort of shared evolutionary history, so....