Where is the standardized, open-source, generally intelligent, consequentialist optimization process into which we can feed a complete morality as an XML file, to find out what that morality really recommends when applied to our world?
We have reasons to think this step will never be easy.
If you imagine that this file, like most files, is something like version 2.1.8, who is going to decide that this version “counts”, rather than waiting to see what comes out of the tests underway on version 2.1.9? By what moral criteria will we settle on a standard morality file? Of course, Nietzsche also foresaw this problem, and Dennett points out that it remains a big problem despite how much we’ve learned about what humans are, but he does not proffer a solution to it. Do we just want the utility function currently in vogue to win out? When will we be satisfied that we’ve got the right one?
Or will evolution (i.e., force) settle it?