Even if you can cleanly distinguish them for a human, what’s the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)
To me, “omniscience” and “omnipotence” seem to be self-contradictory notions. Therefore, I consider it a waste of time to think about beings with such attributes.
reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.
OK. Do you think that if someone (e.g. an AI) kills random people for a positive overall effect but manages to convince the public that the deaths were accidents (and therefore public trust is maintained), then that is a morally acceptable option?