Dwelling on issues such as whether a moral self-driving car may run over migrating crabs on Christmas Island, this paper is unlikely to find much appeal here. Simply put, unless something entails the total destruction of everything, locals are unlikely to take much notice; movement building holds no appeal for them. This paper will likely be dismissed as academic lobbying, irrelevant to the apocalypse.
But thank you very much for sharing it. I enjoyed a quick read, and in particular seeing familiar maneuvers like:
It is important to emphasize that although we ourselves accept the principle of equal consideration of similar interests, the arguments that follow are relevant to everyone who accepts that the interests of animals matter to some extent, even if they do not think that similar human and non-human interests matter equally.
This is what I find so strong in Singer’s argumentation: graceful degradation under weakening assumptions, with the conclusion surviving in every case. No room for respectable indifference. Ethics should bite, bleed, not let go.
This point at the end is great, I think:
In fact, it is evident that some scholars hold complacent and misguided judgments about ethics, and therefore are likely to develop methods of ethics building in AI that are unfriendly to animals. For example, Wu and Lin, … proposed an approach to ethics-building in AI systems that is based on “the assumption that under normal circumstances the majority of humans do behave ethically” … Most humans, however, buy factory-farmed animal products, if they are available and affordable for them. As we have already argued, this causes unnecessary harm to animals, and so is ethically dubious. Therefore, at least with regard to the treatment of animals, we cannot accept Wu and Lin’s assumption.
This is a point that I wish I heard more often on LessWrong. Preserving the world as it is today would not be an achievement but rather a disaster. Preserving the world that most humans want is not obviously better than Clippy, either. An “aligned” AGI may be worse than useless, depending on who does the “alignment”. These are all simple observations for the utilitarian, but I frequently find the instinct for survival has a yet greater hold on people here.