Ok, the intuition pump is problematic in that not only do you know what the first digit of pi is, but it is also easy for the AI to calculate.
Perhaps I wasn’t clear. I meant that Omega does not actually tell you what logical proposition it used. The phrase “some logical proposition” is literally what Omega says, it is not a placeholder for something more specific. All you have to go on is that of the things that Omega believes with probability .5, on average half of them are actually true.
Can you imagine a least convenient possible world in which there is a logical fact for Omega to use that you know the answer to, but that is not trivial for the AI to calculate? Would you agree that it makes sense to enter it into the AI’s prior?
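As a side note, here is a minimal sketch of the epistemic situation described above, under the assumption that Omega is simply well calibrated (this is my own illustration, not anything stated in the thread; the names are made up): if each unnamed proposition is, from your perspective, an even coin flip, then Omega's stated .5 is also the best credence you can adopt.

```python
import random

# Toy model (illustration only): Omega assigns probability .5 to many logical
# propositions; we model "well calibrated" by making each one true with
# probability 0.5, independently, and checking the long-run frequency.
N = 100_000
truths = [random.random() < 0.5 for _ in range(N)]   # hidden truth values
fraction_true = sum(truths) / N

print("Omega's stated credence: 0.5")
print(f"Fraction of such propositions that are actually true: {fraction_true:.3f}")
# Knowing only this calibration, and not which proposition was used, your own
# best credence in the particular proposition Omega picked is also 0.5.
```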
No. A properly designed AGI should be able to figure out any logical fact that I know.
My point was that …
My point was that one particular argument you made does not actually support your point.
I’ve given such a logical fact before.
“After thinking about it for a sufficiently long time, the AI at some time or other will judge this statement to be false.”
This might very well be a logical fact, because its truth or falsehood can be determined from the AI’s programming, something quite logically determinate. But it is quite difficult for the AI to discover the truth of the matter.
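To make the self-reference concrete, here is a minimal sketch (my own toy model, assuming a deterministic reasoner that returns a verdict; the function names are made up): whatever the program says about the statement, the statement's actual truth value, fixed by the program's own source code, comes out the opposite way.

```python
# Toy model of the self-referential statement. S says:
#   "judge('S') will return False."
# S's truth value is fully determined by the source code of judge(), yet no
# deterministic judge that returns a verdict can ever get it right.

def judge(statement_id: str) -> bool:
    """A stand-in for the AI's deliberation; any hard-coded rule will do."""
    return True  # suppose the AI eventually settles on "True"

def truth_value_of_S() -> bool:
    # S asserts that judge("S") returns False, so S is true iff that happens.
    return not judge("S")

if __name__ == "__main__":
    verdict = judge("S")           # what the AI asserts about S
    actual = truth_value_of_S()    # what S actually is, given the AI's code
    print(f"AI's verdict on S: {verdict}; actual truth value of S: {actual}")
    # Flip the return value inside judge() and the two still disagree: a human
    # reading the code can know the answer, but the AI itself cannot assert it.
```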
My point was that one particular argument you made does not actually support your point.
Ok, fair enough, I guess. I still think you’re not assuming the least convenient possible world; perhaps some astrophysical observation of yours that isn’t available to the AI allows you to have high confidence about some digits of Chaitin’s constant. But that’s much more subtle than what I had in mind when writing the post, so thanks for pointing that out.
Perhaps I wasn’t clear. I meant that Omega does not actually tell you what logical proposition it used.
Ok, I misunderstood, sorry. I don’t understand the point you were making there, then. My intent was to use a digit large enough that the AI cannot compute it in the time Omega is allowing it; I don’t see any difference between your version and mine, then?
perhaps some astrophysical observation of yours that isn’t available to the AI
The best approach I know of right now for constructing an FAI is CEV. An AI that can pull that off should also be able to access any astrophysical data I possess; I am not sure what the point would be if it couldn’t. The expected utility of programming the FAI to be able to figure this stuff out is much higher than that of building it a giant lookup table of stuff I know, unless I had magical advance knowledge that some particular fact I know will be incredibly useful to the FAI.
My intent was to use a digit large enough that the AI cannot compute it in the time Omega is allowing it; I don’t see any difference between your version and mine, then?
Yes, there is no difference, given that you have a sufficiently large digit. The reason I brought up my version is so that you don’t have to worry about computing the truth value of the logical proposition as a strategy, as you don’t even know which logical proposition was used.
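For concreteness, here is a rough sketch of why “a sufficiently large digit” does the work (my own illustration, not anything from this exchange, and it extracts hexadecimal rather than decimal digits, though the point is the same): the Bailey-Borwein-Plouffe formula computes the n-th hex digit of pi, but the work grows with n, so choosing n astronomically large puts the digit out of reach of any reasoner with a bounded time budget, even though the digit is perfectly well defined.

```python
def pi_hex_digit(n: int) -> int:
    """Return the n-th hexadecimal digit of pi after the point (1-indexed),
    via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def partial_sum(j: int) -> float:
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):  # exact part, via modular exponentiation
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n               # rapidly vanishing tail
        while True:
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                return s
            s = (s + term) % 1.0
            k += 1

    frac = (4 * partial_sum(1) - 2 * partial_sum(4)
            - partial_sum(5) - partial_sum(6)) % 1.0
    return int(frac * 16)

# The cost of pi_hex_digit(n) grows with n (n modular exponentiations per call),
# and float precision limits how far this naive version can go; the point is
# only that a large enough n defeats any fixed time budget Omega allows the AI.
print([pi_hex_digit(i) for i in range(1, 9)])  # pi = 3.243F6A88..., so 2, 4, 3, 15, 6, 10, 8, 8
```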