Peter de Blanc: You are right, and I came to the same conclusion while walking this morning. I was trying to simplify the problem in order to easily obtain probabilities <= 1/3^^^^3, which would solve the "paradox". We now agree that I oversimplified it.
Instead of messing with a proof-like approach again, I will try to clarify my intuition. When you start considering events of that magnitude, you must consider a great many events (including waking up with blue tentacles for hands, to take Eliezer's example). For mutually exclusive events, the total probability is limited to 1. Absent evidence, there is no reason to assign more probability to this event than to any other. There is little evidence for a device exterior to our universe that can "read" our choice (giving the five dollars or not) and then carry out the claim. I don't think that claim is even falsifiable "from within our universe".
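The dilution argument above can be sketched numerically. This is a toy illustration under an assumption of a symmetric prior over mutually exclusive hypotheses: with no evidence distinguishing them, each of N hypotheses gets at most 1/N, and N at these magnitudes makes each share negligible. 3^^^^3 itself is far too large to represent, so a much smaller tower stands in for it.

```python
from fractions import Fraction

def symmetric_prior(n):
    """Probability each of n mutually exclusive, equally supported
    hypotheses receives when total probability 1 is split evenly."""
    return Fraction(1, n)

# Even the modest tower 3^^3 = 3^27 = 7,625,597,484,987 already
# dilutes each hypothesis's share to roughly 1.3e-13.
n = 3 ** 27
p = symmetric_prior(n)
assert p * n == 1          # the shares still sum to exactly 1
print(float(p))            # each hypothesis gets ~1.3e-13
```

The point is only that the normalization constraint, not any special skepticism about muggers, is what forces each unsupported extraordinary claim toward a vanishing prior.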
If the claim is not falsifiable, the AI should not accept unless I do something "impossible" within its current framework of thinking. One proof request I have in mind is to perform some calculations on the 3^^^^3-scale computer and share easily verifiable results that would otherwise take longer than the age of the universe to obtain. The AI could also ask "simulate me and find a proof that would convince me". Once the AI is convinced, it could also throw in another five dollars and ask for some algorithmic improvements that would otherwise require billions of years to achieve. Or for ssh access to the 3^^^^3 computer.
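The proof request above relies on problems that are believed hard to solve but cheap to check. A minimal sketch, assuming factoring a large semiprime as the stand-in challenge (the specific problem is my choice for illustration, not something from the original scenario): the mugger's claimed extra-universal computer returns the factors, and the AI verifies them with a single multiplication.

```python
def verify_factorization(n, p, q):
    """Cheap check of an expensive computation: p and q must be
    nontrivial factors whose product is n. Verification costs one
    multiplication no matter how hard the factoring was."""
    return 1 < p < n and 1 < q < n and p * q == n

# Toy numbers stand in for a cryptographically large challenge;
# in practice n would be thousands of bits, chosen so that no
# universe-bound computer could factor it in the age of the universe.
challenge = 41 * 43
claimed = (41, 43)          # the mugger's claimed answer

assert verify_factorization(challenge, *claimed)
assert not verify_factorization(challenge, 7, 11)
```

The same asymmetry underpins the other requests: an algorithmic improvement or a simulation-derived proof is worthless unless the AI can check it far more cheaply than it could have produced it.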