If a UDT agent is presented with a counterfactual mugging based on uncertainty of a logical proposition, it should attempt to resolve the logical uncertainty and act accordingly.
Ok, the intuition pump is problematic in that not only do you know what the first digit of pi is, it is also easy for the AI to calculate. Can you imagine a least convenient possible world in which there is a logical fact for Omega to use that you know the answer to, but that is not trivial for the AI to calculate? Would you agree that it makes sense to enter it into the AI’s prior?
My point was that since you’re consciously creating the AI, you know that Omega didn’t destroy Earth, so you know that the umpteenth digit of pi is odd, and you should program that into the AI. (A’ight, perhaps the digit is in fact even and you’re conscious only because you’re a Boltzmann brain who’s about to be destroyed, but let’s assume that case away.)
since you’re consciously creating the AI, you know that Omega didn’t destroy Earth
Omega’s unconscious model of you also ‘knows’ this. The abstract computation that is your decision process doesn’t have direct knowledge of whether or not it’s instantiated by anything ‘real’ or ‘conscious’ (whatever those mean).
My intent when I said “never instantiated as a conscious being” was that Omega used some accurate statistical method of prediction that did not include a whole simulation of what you are experiencing right now. I agree that I can’t resolve the confusion about what “conscious” means, but when considering Omega problems, I don’t think it’s going too far to postulate that Omega can use statistical models that predict very accurately what I’ll do without that prediction leading to a detailed simulation of me.
Ok, I can’t rigorously justify a fundamental difference between “a brain being simulated (and thus experiencing things)” and “a brain not actually simulated (and therefore not experiencing things),” so perhaps I can’t logically conclude that Omega didn’t destroy Earth even if its prediction algorithm doesn’t simulate me. But it still seems important to me that my decision theory work well if there is such a difference (if there isn’t, why should I care whether Omega “really” destroys Earth “a million years before my subjective now”, if I go on experiencing my life the way I “only seem” to experience it now?)
My intent when I said “never instantiated as a conscious being” was that Omega used some accurate statistical method of prediction that did not include a whole simulation of what you are experiencing right now.
The point is that the accurate statistical method is going to predict what the AI would do if it were created by a conscious human, so the decision theory cannot use the fact that the AI was created by a conscious human to discriminate between the two cases. It has equal strength beliefs in that fact in both cases, so the likelihood ratio is 1:1.
(Though it seems that if a method of prediction, without making any conscious people, accurately predicts what a person would do, because that person really would do the thing it predicted, then we are talking about p-zombies, which should not be possible. Perhaps this method can predict what sort of AI we would build, and what that AI would do, but not what we would say about subjective experience, though I would expect that subjective experience is part of the causal chain that causes us to build a particular AI, so that seems unlikely.)
The point is that the accurate statistical method is going to predict what the AI would do if it were created by a conscious human, so the decision theory cannot use the fact that the AI was created by a conscious human to discriminate between the two cases. It has equal strength beliefs in that fact in both cases, so the likelihood ratio is 1:1.
I think we’re getting to the heart of the matter here, perhaps, although I’m getting worried about all the talk about consciousness. My argument is that when you build an AI, you should allow yourself to take into account any information you know to be true (knew when you decided to be a timeless decider), even if there are good reasons that you don’t want your AI to decide timelessly and, at some points in the future, make decisions optimizing worlds it at this point ‘knows’ to be impossible. I think it’s really only a special case that if you’re conscious, and you know you wouldn’t exist anywhere in space-time as a conscious being if a certain calculation came out a certain way, then the ship has sailed, the calculation is in your “logical past”, and you should build your AI so that it can use the fact that the calculation does not come out that way.
Though it seems that if a method of prediction, without making any conscious people, accurately predicts what a person would do, because that person really would do the thing it predicted, then we are talking about p-zombies, which should not be possible.
The person who convinced me of this [unless I misunderstood them] argued that there’s no reason to assume that there can’t be calculations coarse enough that they don’t actually simulate a brain, yet specific enough to make some very good predictions about what a brain would do; I think they also argued that humans can be quite good at making predictions (though not letter-perfect predictions) about what other humans will say about subjective experience, without actually running an accurate conscious simulation of the other human.
calculations coarse enough that they don’t actually simulate a brain, yet specific enough to make some very good predictions about what a brain would do
Maybe, but when you’re making mathematical arguments, there is a qualitative difference between a deterministically accurate prediction and a merely “very good” one. In particular, for any such shortcut calculation, there is a way to build a mind such that the shortcut calculation will always give the wrong answer.
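This is the standard diagonalization move. A minimal sketch (all names hypothetical): against any fixed, computable shortcut predictor, one can construct an agent that consults the predictor’s guess about itself and deliberately does the opposite, so the predictor is wrong about that agent by construction.

```python
def shortcut_predictor(agent_name):
    # Stand-in for a coarse, non-simulating model: it applies some
    # fixed computable rule. Here, for illustration, it guesses
    # "cooperate" for every agent.
    return "cooperate"

def diagonal_agent():
    # The agent runs the predictor on itself...
    prediction = shortcut_predictor("diagonal_agent")
    # ...and then deliberately does the other thing.
    return "defect" if prediction == "cooperate" else "cooperate"

# By construction, the shortcut calculation is always wrong
# about this particular mind:
assert diagonal_agent() != shortcut_predictor("diagonal_agent")
```

Nothing hinges on the predictor guessing “cooperate”; any computable rule the agent can run can be inverted the same way, which is why merely “very good” prediction cannot be upgraded to deterministic accuracy against arbitrary minds.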
If you’re writing a thought experiment that starts with “suppose… Omega appears,” you’re doing that because you’re making an argument that relies on deterministically accurate prediction. If you find yourself having to say “never simulated as a conscious being” in the same thought experiment, then the argument has failed. If there’s an alternative argument that works with merely “very good” predictions, then by all means make it—after deleting the part about Omega.
Ok, the intuition pump is problematic in that not only do you know what the first digit of pi is, it is also easy for the AI to calculate.
Perhaps I wasn’t clear. I meant that Omega does not actually tell you what logical proposition it used. The phrase “some logical proposition” is literally what Omega says, it is not a placeholder for something more specific. All you have to go on is that of the things that Omega believes with probability .5, on average half of them are actually true.
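A toy illustration of what “all you have to go on” means here (numbers hypothetical): if Omega is calibrated, then among the propositions it assigns probability .5, roughly half are true, and an agent that never learns *which* proposition was used can do no better than that base rate.

```python
import random

random.seed(1)

# Model each proposition Omega believes with probability .5 as true
# with chance 1/2, independently of which proposition it happens to be.
trials = 100_000
true_count = sum(random.random() < 0.5 for _ in range(trials))

# With no access to the proposition itself, the agent's best estimate
# that "the proposition Omega used is true" is just this base rate.
print(true_count / trials)  # ≈ 0.5
```

This is why the version with an unnamed proposition rules out “just compute the digit” as a strategy: there is nothing to compute.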
Can you imagine a least convenient possible world in which there is a logical fact for Omega to use that you know the answer to, but that is not trivial for the AI to calculate? Would you agree that it makes sense to enter it into the AI’s prior?
No. A properly designed AGI should be able to figure out any logical fact that I know.
My point was that …
My point was that one particular argument you made does not actually support your point.
I’ve given such a logical fact before.

“After thinking about it for a sufficiently long time, the AI at some time or other will judge this statement to be false.”
This might very well be a logical fact because its truth or falsehood can be determined from the AI’s programming, something quite logically determinate. But it is quite difficult for the AI to discover the truth of the matter.
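The trap in that statement can be made concrete. A sketch (names hypothetical): let S be “the AI will judge S to be false,” so S is true exactly when the AI’s verdict on S is False. Checking every possible verdict shows the AI can never verdict correctly, even though an outside observer who knows the AI’s code can see which case obtains.

```python
def truth_value(ai_verdict):
    # S says: "the AI will judge S to be false."
    # So S is true exactly when the AI's verdict on S is False.
    return ai_verdict is False

# Whatever the AI does, a definite verdict comes out wrong:
#   verdict True  -> S is false -> the AI misjudged
#   verdict False -> S is true  -> the AI misjudged
#   no verdict (None) -> S is simply false, but the AI never says so
for verdict in (True, False, None):
    actual = truth_value(verdict)
    judged_correctly = (verdict == actual)
    print(verdict, actual, judged_correctly)
```

So the statement is a perfectly determinate logical fact about the AI’s program, yet one the AI itself cannot settle correctly.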
My point was that one particular argument you made does not actually support your point.
Ok, fair enough, I guess. I still think you’re not assuming the least convenient possible world; perhaps some astrophysical observation of yours that isn’t available to the AI allows you to have high confidence about some digits of Chaitin’s constant. But that’s much more subtle than what I had in mind when writing the post, so thanks for pointing that out.
Perhaps I wasn’t clear. I meant that Omega does not actually tell you what logical proposition it used.
Ok, I misunderstood, sorry. I don’t understand the point you were making there, then. My intent was to use a digit large enough that the AI cannot compute it in the time Omega is allowing it; I don’t see any difference between your version and mine, then?
perhaps some astrophysical observation of yours that isn’t available to the AI
The best approach I know now for constructing an FAI is CEV. An AI that can pull that off should be able to also access any astrophysical data I possess. I am not sure what the point would be if it didn’t. The expected utility of programming the FAI to be able to figure this stuff out is much higher than building it a giant lookup table of stuff I know, unless I had magical advance knowledge that some particular fact that I know will be incredibly useful to the FAI.
My intent was to use a digit large enough that the AI cannot compute it in the time Omega is allowing it; I don’t see any difference between your version and mine, then?
Yes, there is no difference, given that you have a sufficiently large digit. The reason I brought up my version is so that you don’t have to worry about computing the truth value of the logical proposition as a strategy, as you don’t even know which logical proposition was used.