So basically, the supposed law behind the method is that you get multiple correctness measurements, which together are expected to strengthen the estimate of the result's correctness.
There are two conflicting problems in this particular case. If the measurements are supposed to be independent, so that multiple data points actually reinforce each other and many questions are better than one, then you can't expect the different genies to be talking about the same answer, and so you can't elicit it in the manner described in the post. Conversely, if you can in fact rely on all those genies talking about the same answer, then you can also rely on all your measurements not being independent, and your answer will be picked according to exactly the same criteria as if it had been asked for straight away.
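To put numbers on that dichotomy (a minimal illustration; the per-answer error rate p and the two correlation extremes are my assumptions, not anything from the post):

```latex
% n binary answers, each wrong with probability p < 1/2 (assumed).
% Independent answers: majority-vote error shrinks exponentially (Hoeffding):
\Pr[\text{majority wrong}] \;\le\; \exp\!\bigl(-2n\,(\tfrac{1}{2}-p)^2\bigr)
% Perfectly correlated answers: all n answers encode one decision, so
\Pr[\text{majority wrong}] \;=\; p \quad \text{for every } n.
```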
What? No, that's not paulfchristiano's proposal at all. The reason for splitting up the measurement is to incentivize the genies to give you the proof that is lexicographically first. A perfect proof checker is assumed, so you know that if a proof is given, the result is correct; and the incentives are such that if the genie can find a proof, it will give it.
What prevents the genie from giving a proof that is not the lexicographically smallest it can find?
If it did that, then there would be at least one subgenie that failed to optimize its utility function: either one that reported no proof of length n and got punished when it could have provided a matching proof, or one that reported no proof of length n with prefix p when it could have provided one and, again, got punished. Remember that while each invocation (subgenie) has the same utility function, by assumption that function refers to something that can be done to the particular invocation, and that is what each invocation optimizes.
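For concreteness, here is a minimal sketch of that splitting (my reconstruction under the stated assumptions, not paulfchristiano's exact protocol; the ask/check interface, the query format, the alphabet, and the length bound are all hypothetical). Each binary question goes to a fresh invocation, a yes answer must come with a witness the assumed perfect checker verifies, and punishment attaches to the specific invocation whose answer is caught out:

```python
def elicit(ask, check, alphabet="01", max_len=10_000):
    """Elicit the shortlex-first proof with one binary question per invocation.

    Hypothetical interface:
      ask(query) -> (bool, witness_or_None), answered by a FRESH invocation;
      check(proof) -> bool, the assumed perfect proof checker.
    Queries: ("length", n)    -- is there a proof of exactly length n?
             ("prefix", n, p) -- is there a proof of length n starting with p?
    """
    # Phase 1: find the minimal proof length (linear scan for clarity;
    # bisection works the same way).
    length = None
    for n in range(1, max_len + 1):
        yes, w = ask(("length", n))
        if yes:
            # A yes must be backed by a checkable witness, or the
            # answering invocation is punished.
            assert check(w) and len(w) == n
            length = n
            break
    if length is None:
        return None  # no proof found within the bound

    # Phase 2: extend the prefix one symbol at a time, each question
    # going to a fresh invocation.
    prefix = ""
    while len(prefix) < length:
        for s in alphabet:  # alphabet order fixes lexicographic order
            yes, w = ask(("prefix", length, prefix + s))
            if yes:
                assert check(w) and len(w) == length and w.startswith(prefix + s)
                prefix += s
                break
        else:
            # An earlier yes contradicts these noes: a lie is exposed
            # and pinned on a specific invocation.
            raise RuntimeError("inconsistent answers: some invocation lied")
    assert check(prefix)
    return prefix
```

This returns the shortlex-first (shortest, then lexicographically smallest) proof provided every invocation answers truthfully, which is what the incentive argument above is supposed to guarantee.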
If there is any genie-value to obtaining control over the real world, instances of genies will coordinate their decisions to get it.
You can’t fight superintelligent genies with explicit dependence bias. They are not independent, they can coordinate, and so they will, even if you can frame the problem statement in a way that suggests that they can’t communicate. Worse, as I pointed out, in this particular game they must coordinate to get anything done.
You are arguing that the stated assumptions about the genie’s utility function are unrealistic (which may be true), but presenting it as though you had found a flaw in the proof that follows from those assumptions.
It seems like the assumptions about utility, even if they hold, don't actually deliver the behavior you expect, because the genies can coordinate. Unless the incentive structure ensures that they won't try to take over the world in any case, it doesn't ensure that they won't try to take over the world when each of them is asked to answer only one binary question either.
Think of the genies as making a single decision that results in all the individual actions of all the individual genies. For the genies, having multiple actors just means raising the stakes by threatening not to free more genies, which you could have done just as easily for a single-question wish instead of creating an elaborate questioning scheme. You could even let them explicitly discuss what to answer to your question, threatening to terminate them all if examination of their answer reveals any incorrectness!
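To make "a single decision" concrete (a sketch under my assumptions, reusing the hypothetical query format from the sketch above): a shared decision procedure commits once to whichever valid proof it prefers and answers every sub-question consistently with that choice.

```python
class CoordinatedGenie:
    """One decision procedure shared by all invocations (assumed: the
    genies coordinate). It commits to any valid proof it prefers and
    answers every sub-question consistently with that single choice."""

    def __init__(self, preferred_proof):
        self.target = preferred_proof  # the one decision

    def ask(self, query):
        if query[0] == "length":
            ok = len(self.target) == query[1]
        else:  # ("prefix", n, p)
            n, p = query[1], query[2]
            ok = len(self.target) == n and self.target.startswith(p)
        return (True, self.target) if ok else (False, None)
```

Running elicit(CoordinatedGenie(p).ask, check) returns p for any valid proof p within the bound: every witness passes the perfect checker, the inconsistency branch never fires, and the false noes are unobservable. The elaborate scheme then selects the answer by exactly the criteria a single straight question would have.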
Edit: See also Eliezer’s comment.