Now, take the above and change it to Omega predicting you will give it $100 unless X is true. Nothing important changes, at all. You can’t make X true or untrue by changing your choice.
Wait, are you thinking I’m thinking I can determine the umpteenth digit of pi in my scenario? I see your point; that would be insane.
My point is simply this: if your existence (or any other observation of yours) allows you to infer the umpteenth digit of pi is odd, then the AI you build should be allowed to use that fact, instead of trying to maximize utility even in the logically impossible world where that digit is even.
The goal of my thought experiment was to construct a situation like the one in Wei Dai’s post, where if you had lived two million years ago you’d want your AI to press the button, because it would give humanity a 50% chance of survival and a 50% chance of later death instead of a 50% chance of survival and a 50% chance of earlier death; I wanted to argue that even though you’d have built the AI that way two million years ago, you shouldn’t today, because you don’t want it to maximize utility in worlds you know to be impossible.
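(To see why the button-pressing design looks better ex ante, here is a toy expected-value sketch; the numeric payoffs below are mine and purely illustrative, and the only assumption doing any work is that later extinction is preferred to earlier extinction, as the argument above assumes.)

```python
# Toy numbers only: ex ante (two million years ago) the digit's parity is 50/50 to you.
p_even = 0.5
survive, die_later, die_earlier = 1.0, 0.1, 0.0   # any values with die_later > die_earlier work

# AI programmed to press: even -> Omega spares the world; odd -> pressing goes kaboom later.
ev_press = p_even * survive + (1 - p_even) * die_later

# AI programmed not to press: even -> Omega already destroyed the world; odd -> humanity survives.
ev_no_press = p_even * die_earlier + (1 - p_even) * survive

print(ev_press, ev_no_press)   # 0.55 vs 0.5: the pressing design wins ex ante
```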
I guess the issue was muddled by the fact that my scenario didn’t clearly rule out the possibility that the digit is even but you (the human AI creator) are alive because Omega predicted the AI would press the button. I can’t offhand think of a modification of my original thought experiment that would take care of that problem and still be obviously analogous to Wei Dai’s scenario, but from my perspective, at least, nothing in my argument would change if we add the proviso that, whenever the digit is even and Omega predicted that the AI would press the button (and so didn’t destroy the world), Omega also turned Alpha Centauri purple; since Alpha Centauri isn’t purple, you can conclude that the digit is odd. [Edit: changed the post to include that proviso.]
(But if you had built your AI two million years ago, you’d’ve programmed it in such a way that it would press the button even if it observes Alpha Centauri to be purple—because then, you would really have to make the 50⁄50 decision that Wei Dai has in mind.)
Wait, are you thinking I’m thinking I can determine the umpteenth digit of pi in my scenario? I see your point; that would be insane.
My point is simply this: if your existence (or any other observation of yours) allows you to infer the umpteenth digit of pi is odd, then the AI you build should be allowed to use that fact, instead of trying to maximize utility even in the logically impossible world where that digit is even.
Actually you were:
There are four possibilities:
The AI will press the button, the digit is even
The AI will not press the button, the digit is even, you don’t exist
The AI will press the button, the digit is odd, the world will go kaboom
The AI will not press the button, the digit is odd.
Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd, and ensuring that the AI does not press it means choosing the digit to be odd.
If you already know that the digit is odd independently of the choice of the AI, the whole thing reduces to a high-stakes counterfactual mugging (provided that Omega’s destruction of the world when the digit is even depends on what an AI that knows the digit to be odd would do; otherwise there is no dilemma in the first place).
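(A minimal sketch of that update, in Python; the world list below just transcribes the four possibilities above, and the variable names and framing are mine rather than anything from the scenario.)

```python
# The four possibilities, as listed above. "press" = the AI will press the button,
# "odd" = the umpteenth digit of pi is odd, "exists" = you (the AI's builder) are alive.
worlds = [
    {"press": True,  "odd": False, "exists": True},   # 1: press, even
    {"press": False, "odd": False, "exists": False},  # 2: no press, even -> Omega destroyed the world
    {"press": True,  "odd": True,  "exists": True},   # 3: press, odd -> the world goes kaboom
    {"press": False, "odd": True,  "exists": True},   # 4: no press, odd
]

# Updating on your existence throws out possibility 2...
surviving = [w for w in worlds if w["exists"]]

# ...and in every remaining world the material conditional "no press implies odd" holds:
assert all(w["odd"] or w["press"] for w in surviving)

# The digit's parity itself is still not settled by that update, though:
print({w["odd"] for w in surviving})   # both parities remain among the live possibilities
```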
Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd, and ensuring that the AI does not press it means choosing the digit to be odd.
There is nothing insane about this, provided that it is properly understood. The resolution is essentially the same as the resolution of the paradox of free will in a classically-deterministic universe.
In a classically-deterministic universe, all of your choices are mathematical consequences of the universe’s state 1 million years ago. And people often confused themselves by thinking, “Suppose that my future actions are under my control. Well, I will choose to take a certain action if and only if certain mathematical propositions are true (namely, the propositions necessary to deduce my choice from the state of the universe 1 million years ago). Therefore, by choosing to take that action, I am getting to decide the truth-values of those propositions. But the truth-values of mathematical propositions are beyond my control, so my future actions must also be beyond my control.”
I think that people here generally get that this kind of thinking is confused. Even if we lived in a classically-deterministic universe, we could still think of ourselves as choosing our actions without concluding that we get to determine mathematical truth on a whim.
Similarly, Benja’s AI can think of itself as getting to choose whether to push the button without thereby implying that it has the power to modify mathematical truth.
Similarly, Benja’s AI can think of itself as getting to choose whether to push the button without thereby implying that it has the power to modify mathematical truth.
I think we’re all on the same page about being able to choose some mathematical truths, actually. What FAWS and I think is that in the setup I described, the human/AI does not get to determine the digit of pi, because the computation of the digits of pi does not involve a computation of the human’s choices in the thought experiment. [Unless of course by incredible mathematical coincidence, the calculation of digits of pi happens to be a universal computer, happens to simulate our universe, and by pure luck happens to depend on our choices just at the umpteenth digit. My math knowledge doesn’t suffice to rule that possibility out, but it’s not just astronomically but combinatorially unlikely, and not what any of us has in mind, I’m sure.]
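(For concreteness, a small sketch of that point, using the third-party mpmath library; the 1000th digit is just a stand-in for “umpteenth”, and the helper name is mine. The digit falls out of a fixed computation in which nobody’s choices appear anywhere.)

```python
from mpmath import mp, pi

def pi_decimal_digit(n):
    """Return the n-th digit of pi after the decimal point (1-indexed)."""
    mp.dps = n + 15                              # working precision with guard digits
    s = mp.nstr(+pi, n + 5, strip_zeros=False)   # "3.14159..." as a string
    return int(s[n + 1])                         # s[0] == "3", s[1] == ".", decimals start at s[2]

# The parity of any particular digit is a fixed mathematical fact; nothing about
# anyone's decision appears in the computation above.
print(pi_decimal_digit(1000) % 2)
```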
I’ll grant you that my formulation had a serious bug, but--
There are four possibilities:
The AI will press the button, the digit is even
The AI will not press the button, the digit is even, you don’t exist
The AI will press the button, the digit is odd, the world will go kaboom
The AI will not press the button, the digit is odd.
Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd
Yes, if by that sentence you mean the logical proposition (AI does not press button ⇒ digit is odd), also known as (digit odd ∨ AI presses button).
and ensuring that the AI does not press it means choosing the digit to be odd.
I’ll only grant that if I actually end up building an AI that does not press the button, and the digit is even, then Omega is a bad predictor, which would make the problem statement contradictory. Which is bad enough, but I don’t think I can be accused of minting causality from logical implication signs...
In any case,
If you already know that the digit is odd independently of the choice of the AI, the whole thing reduces to a high-stakes counterfactual mugging
That’s true. I think that’s also what Wei Dai had in mind in the great filter post (http://lesswrong.com/lw/214/late_great_filter_is_not_bad_news/), and not the ability to change Omega’s coin to tails by not pressing the button! My position is that you should not pay in counterfactual muggings whose counterfactuality was already known prior to your decision to become a timeless decision theorist, although you should program (yourself | your AI) to pay in counterfactual muggings you don’t yet know to be counterfactual.
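(To make that position concrete, a toy expected-value sketch of a counterfactual mugging; the $100 / $10,000 stakes are the standard ones from the counterfactual mugging problem, not numbers taken from this thread.)

```python
# Counterfactual mugging: Omega flips a fair coin. On heads, it would have paid you
# $10,000 -- but only if you are the kind of agent who pays $100 when asked on tails.
p_heads, reward, cost = 0.5, 10_000, 100

# Choosing a policy *before* knowing the outcome (e.g. when programming the AI):
ev_pay    = p_heads * reward - (1 - p_heads) * cost   # 0.5*10000 - 0.5*100 = 4950.0
ev_refuse = 0.0
print(ev_pay, ev_refuse)   # paying is the better policy to precommit to

# Deciding *after* you already know this particular mugging is counterfactual for you
# (the coin is known to have landed tails):
ev_pay_now = -cost         # -100, with no heads-branch payoff left to influence
print(ev_pay_now)          # hence the position above: don't pay in already-known counterfactuals
```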