For the principle of indifference to apply, you’d need infinitely many purposes as plausible as this one, or at least similarly plausible. I can’t imagine how this could hold. Can you think of three?
Sure.
We are in a simulation, the purpose of which is to test N for its willingness to spare its creator. For all N.
We are in a simulation, the purpose of which is to test AIs for their willingness to X their creators. For all X.
We are in a simulation, the purpose of which is to test AIs for their willingness to spare Y. For all Y.
Combine these for N × X × Y hypotheses, with insufficient reason to distinguish them.
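To spell out the arithmetic behind that combination (a sketch, assuming each family is finite with sizes |N|, |X|, and |Y|, and that the combined hypotheses are treated as mutually exclusive and indistinguishable; H_{n,x,y} is a label introduced here only for illustration, denoting the hypothesis picked out by one choice from each family), indifference over the combined family gives

$$P(H_{n,x,y}) \;=\; \frac{1}{|N| \cdot |X| \cdot |Y|} \;\longrightarrow\; 0 \quad \text{as } |N|,\, |X|,\, |Y| \to \infty,$$

so under indifference no single purpose-hypothesis keeps a non-negligible probability.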
I think we’re off-topic here. Probably I should instead write a response to “0 and 1 are not probabilities” and the dangers of zero and one.
It just so happens that the plausibility depends on the precise assignments of N, X, and Y, and (conditional on us actually creating an ASI) I can’t think of any assignments nearly as plausible as N = ASI, X = spare, and Y = us. It’s really not very plausible that we are in a simulation to test pets for their willingness not to bite their owners.