That might be a good argument against programming a powerful AI with such a theory, or at least an argument for expanding the theory a lot more before you program it in. But humans should really know better.
Sure. Which means whether arguments of this sort are important depends a lot on whether what I’m trying to do is formalize ethics sufficiently to talk about it in English with other humans who share my basic presumptions about the world, or formalize ethics sufficiently to embed it into an automated system that doesn’t.
It sounds like you presume that the latter goal is irrelevant to this discussion. Have I got that right?