I finally put words to my concern with this. Hopefully it doesn’t get totally buried because I’d like to hear what people think.
It might be the case that a race of consequentialists would come up with deontological prohibitions upon reflecting on their imperfect hardware. But that isn't close to the right story for how human deontological prohibitions actually came about. There was no reflection at all; cultural and biological evolution just gave us normative intuitions and cultural institutions. If things had been otherwise (if our ancestors had been more rational), perhaps we wouldn't have developed the instinct that the ends don't always justify the means. But that is different from saying that a perfectly rational present-day human can just ignore deontological prohibitions. Our ancestral environment could have been different in lots of ways. Threats from carnivores and other tribes could have left us with a much stronger instinct for respecting authority, such that we would follow our leaders in all circumstances. We could have been stronger individually and less reliant on parents, such that there was no reason for altruism to develop into as strong a force as it is. You can't extrapolate an ideal morality from a hypothetical ancestral environment.
Non-consequentialists think the trolley problems just suggest that our instincts are not, in fact, strictly utilitarian. It doesn't matter that an AI doesn't have to worry about corrupted hardware: if it isn't acting consistently with human moral intuitions, it isn't ethical (bracketing concerns about changes and variation in ethics).
Interesting point. It seems like human morality is more than just a function which maximizes human prosperity or minimizes human deaths; it takes a LOT more into account than simply how many people die.
However, it does take into account its own biases, at least when it finds them displeasing, and corrects for them. When it thinks it has made an error, it corrects the part of the function which produced that error. For example, we might learn new things about game theory, or even switch from a deontological ethical framework to a utilitarian one.
So, the meta-level question is which of our moral intuitions are relevant to the trolley problem (or, more generally, which moral framework is correct). If human deaths can be shown to be much more morally important than other factors, then the good of the many outweighs the good of the few. If, however, deontological ethics is correct, then the ends don't justify the means.