The point is that the accurate statistical method is going to predict what the AI would do if it were created by a conscious human, so the decision theory cannot use the fact that the AI was created by a conscious human to discriminate between the two cases. It has equally strong beliefs in that fact in both cases, so the likelihood ratio is 1:1.
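The 1:1 likelihood ratio can be checked with Bayes' rule in odds form: if the evidence is equally probable under both hypotheses, observing it leaves the odds exactly where they were. This is a toy sketch with made-up illustrative numbers, not anything from the discussion above.

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

# Illustrative numbers: 3:1 prior odds that the AI's creator was conscious,
# and evidence that is equally likely under both hypotheses (LR = 1).
prior = 3.0
lr = 1.0

# With LR = 1:1, the evidence cannot discriminate: the odds are unchanged.
assert posterior_odds(prior, lr) == prior
```

The point of the sketch is just that an LR of 1 makes the multiplication a no-op, so the decision theory gains nothing from conditioning on that fact.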
I think we’re getting to the heart of the matter here, perhaps, although I’m getting worried about all the talk about consciousness. My argument is that when you build an AI, you should allow yourself to take into account any information you knew to be true when you decided to become a timeless decider, even if there are good reasons that you don’t want your AI to decide timelessly and, at some points in the future, make decisions optimizing worlds it at this point ‘knows’ to be impossible. The consciousness case is really only a special case of this: if you’re conscious, and you know you wouldn’t exist anywhere in space-time as a conscious being if a certain calculation came out a certain way, then the ship has sailed, the calculation is in your “logical past”, and you should build your AI so that it can use the fact that the calculation does not come out that way.
Though it seems that if a method of prediction, without instantiating any conscious people, accurately predicts what a person would do (because that person really would do the thing it predicted), then we are talking about p-zombies, which should not be possible.
The person who convinced me of this [unless I misunderstood them] argued that there’s no reason to assume that there can’t be calculations coarse enough that they don’t actually simulate a brain, yet specific enough to make some very good predictions about what a brain would do; I think they also argued that humans can be quite good at making predictions (though not letter-perfect predictions) about what other humans will say about subjective experience, without actually running an accurate conscious simulation of the other human.
calculations coarse enough that they don’t actually simulate a brain, yet specific enough to make some very good predictions about what a brain would do
Maybe, but when you’re making mathematical arguments, there is a qualitative difference between a deterministically accurate prediction and a merely “very good” one. In particular, for any such shortcut calculation, there is a way to build a mind such that the shortcut calculation will always give the wrong answer.
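The "always give the wrong answer" claim is a diagonalization argument, and it can be sketched concretely: given any shortcut predictor, build a mind that runs the predictor on itself and then does the opposite. Everything here is hypothetical scaffolding; the predictor shown is a trivial stand-in for whatever coarse calculation the argument posits.

```python
def shortcut_predictor(agent_name: str) -> str:
    """Stand-in for any coarse, non-simulating prediction calculation.
    This hypothetical one just guesses 'one-box' for every agent."""
    return "one-box"

def adversarial_agent() -> str:
    """A mind built to defeat the shortcut: it consults the predictor's
    verdict about itself and chooses the other action."""
    prediction = shortcut_predictor("adversarial_agent")
    return "two-box" if prediction == "one-box" else "one-box"

# The predictor's guess and the agent's actual choice always differ,
# so this shortcut is systematically wrong about this particular mind.
assert shortcut_predictor("adversarial_agent") != adversarial_agent()
```

The construction only requires that the shortcut calculation is cheap enough for the agent itself to run, which is exactly what makes it a shortcut rather than a full simulation; a deterministically accurate Omega is immune to this move only because, by stipulation, it predicts the agent-including-its-consultation correctly.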
If you’re writing a thought experiment that starts with “suppose… Omega appears,” you’re doing that because you’re making an argument that relies on deterministically accurate prediction. If you find yourself having to say “never simulated as a conscious being” in the same thought experiment, then the argument has failed. If there’s an alternative argument that works with merely “very good” predictions, then by all means make it—after deleting the part about Omega.