Ngl kinda confused how these points imply the post is wrong; the bulk of this seems to be (1) a semantic quibble + (2) a disagreement on who has the burden of proof when it comes to arguing about the plausibility of coherence + (3) maybe just misunderstanding the point that’s being made?
(1) I agree the title is a bit needlessly provocative, and in one sense of course VNM/Savage etc. count as coherence theorems. But the point is that there is another sense in which people use “coherence theorem/argument” in this field, corresponding to something like “If you’re not behaving like an EV-maximiser you’re shooting yourself in the foot by your own lights”. That’s the sense that brings in all the scary normativity, and it’s what the OP is saying doesn’t follow from any existing theorem unless you make a bunch of other assumptions.
(2) The only real substantive objection to the content here seems to be “IMO completeness seems quite reasonable to me”. Why? Having complete preferences seems like a pretty narrow target within the space of all partial orders you could have as your preference relation, so what’s the reason why we should expect minds to steer towards this? Do humans have complete preferences?
(3) In some other comments you’re saying that this post is straw-manning some extreme position because people who use coherence arguments already accept you could have e.g.
>an extremely powerful AI that is VNM rational in all situations except for one tiny thing that does not matter or will never come up
This seems to be entirely missing the point/confused. OP isn’t saying that agents can realistically get away with not being VNM-rational because their inconsistencies/incompletenesses aren’t efficiently exploitable; they’re saying that you can have an agent that isn’t VNM-rational and isn’t exploitable in principle. I.e., your example is an agent that could in theory be money-pumped by another sufficiently powerful agent able to steer the world to where its corner-case weirdness came out. The point being made about incompleteness here is that you can have a non-VNM-rational agent that’s un-Dutch-Bookable not just as a matter of empirical reality but in principle. The former still gets you claims like “A sufficiently smart agent will appear VNM-rational to you; they can’t have any obvious public-facing failings”; the latter undermines this.
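To make the money-pump distinction concrete, here’s a toy sketch (my own illustration, not from the post or this thread; the agent/trade setup is invented for the example): an agent with cyclic strict preferences accepts every trade around the cycle and bleeds money indefinitely, whereas an agent with merely incomplete preferences, which simply declines trades between incomparable options, has no such exploit available even in principle.

```python
# Toy money-pump illustration (hypothetical setup, not from the post).
# An agent is modelled as a set of strict-preference pairs (better, worse);
# it accepts a trade only if it strictly prefers the offered item.

def make_agent(strict_prefs):
    """Return an acceptance function for the given strict-preference pairs.
    Options not related by strict_prefs are incomparable, and the agent
    declines trades involving them."""
    def accepts(offered, held):
        return (offered, held) in strict_prefs
    return accepts

def money_pump(accepts, start, trades, fee=1):
    """Offer a sequence of trades, charging `fee` per accepted swap.
    Returns the total money extracted from the agent."""
    held, extracted = start, 0
    for offered in trades:
        if accepts(offered, held):
            held = offered
            extracted += fee
    return extracted

# Cyclic agent: strictly prefers A>B, B>C, C>A -- pumpable forever.
cyclic = make_agent({("A", "B"), ("B", "C"), ("C", "A")})
# Incomplete agent: only A>B is ranked; B/C and A/C are incomparable.
incomplete = make_agent({("A", "B")})

cycle = ["C", "B", "A"] * 10  # walk the preference cycle ten times
print(money_pump(cyclic, "A", cycle))      # → 30: one fee per trade, every lap
print(money_pump(incomplete, "A", cycle))  # → 0: no trade it strictly prefers
```

The incomplete agent here isn’t just hard to exploit in practice; no sequence of trades of this form extracts unbounded money from it, which is the in-principle sense the OP is pointing at.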