The great thing is, this is ultimately an empirical question! Once we make an aligned ASI, we can run lots of simulations (carefully, to avoid inflicting suffering on innocent beings—philosophical zombie simulacra will likely be enough for this purpose) to get a sense of what the actual distribution of utility functions among ASIs in the multiverse might be like. “Moral science”...
I definitely want to say that there’s reason to believe at least some portions of the disagreement are testable, though I want to curb enthusiasm by saying that we probably can’t resolve the disagreement in general, unless we can somehow either make a new universe with different physical constants or modify the physical constants of our own.
Also, I suspect the condition below makes it significantly harder, or flat-out impossible, to run experiments like this, at least without confounding the results and thereby rendering the experiment worthless.
(carefully, to avoid inflicting suffering on innocent beings—philosophical zombie simulacra will likely be enough for this purpose)