I thought we were listing anything at least as plausible as the evil giant hypothesis. I have no information as to the morality distribution of giants in general, so I use maximum entropy and assign ‘evil giant’ and ‘good giant’ equal probability.
Given complexity of value, ‘evil giant’ and ‘good giant’ should not be weighted equally; if we have no specific information about the morality distribution of giants, then as with any optimization process, ‘good’ is a much, much smaller target than ‘evil’ (if we’re including apparently-human-hostile indifference).
Unless we believe them to be evolutionarily close to humans, or to have evolved under some selection pressures similar to those that produced morality, etc., in which case we can do a bit better than a complexity prior for moral motivations.
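A minimal sketch of that “small target” intuition, under entirely made-up assumptions (ten independent value dimensions, a uniform max-entropy draw over them, and a hypothetical tolerance for counting a giant as ‘good’):

```python
import numpy as np

# Toy illustration (hypothetical numbers): treat a giant's "values" as a random
# point in a high-dimensional preference space, and count it as 'good' only if
# it lands close to human values on every axis.
rng = np.random.default_rng(0)

dims = 10          # assumed number of independent value dimensions
tolerance = 0.1    # assumed per-dimension closeness needed to count as 'good'
samples = 1_000_000

human_values = rng.uniform(-1, 1, size=dims)
giants = rng.uniform(-1, 1, size=(samples, dims))

# Under a uniform (maximum-entropy) draw over motivations, almost nothing lands
# in the 'good' region; nearly everything is hostile or indifferent.
good = np.all(np.abs(giants - human_values) <= tolerance, axis=1)
print(f"fraction of 'good' giants: {good.mean():.2e}")
# The expected fraction is roughly tolerance**dims (~1e-10 here), so none of
# the million sampled giants is likely to qualify.
```

This is only an illustration of why ‘good’ and ‘evil-or-indifferent’ shouldn’t get equal weight absent further information, not a claim about the actual value space of giants.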
(For more on this, check out my new blog, Overcoming Giants.)
Well, if by giants we mean “things that seem to resemble humans, only particularly big”, then we should expect some sort of shared evolutionary history, so....
A good giant?
Sure, but I wouldn’t give a “good giant” really any more probability than an “evil giant”. Both fall into the “completely negligible” hole. :)
Though, as we all know, if we do find one, the correct action to take is to climb up so that one can stand on its shoulders. :)
Which can be fun to do with a windmill, also.
Since when do windmills have shoulders? :)