I guess my position is thus:
While some sets of probabilities are, by themselves, inadequate to capture the information relevant to a decision, there always exists some set of probabilities that is adequate.
In that sense I do not see your article as an argument against using probabilities to represent decision information, but rather as a reminder to use the correct set of probabilities.
To me, the part that stands out most is the computation of P() by the AI.
From this description, P seems to be essentially omniscient: it knows the locations and velocities of every particle in the universe, and it has unlimited computational power. Regardless of whether possessing and computing with such information is actually possible, the AI will model P as literally omniscient. I see no reason P could not hypothetically run the laws of physics in reverse, and so it would always return 1 or 0 for any statement about reality.
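To put the point in symbols (my notation, not the article's): if the universe's dynamics $\Phi_t$ are deterministic and P conditions on the exact microstate $\omega_0$, then for any statement $A$ about the state of the universe at time $t$,

$$P(A) = \mathbf{1}\left[\Phi_t(\omega_0) \in A\right] \in \{0, 1\},$$

since knowing the microstate at one time pins down the state at every other time, forward or backward. The "probability" degenerates into an indicator function.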
Of course, you could add noise to P's inputs, put a strict limit on P's computational power, or model P as a hypothetical set of sensors that is very fine-grained but not omniscient. But each of these introduces another free variable into the model, on top of lambda, any one of which could completely undo the setup if set wrong, and there is no natural choice for any of them.
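As a toy sketch of that worry (entirely my own construction, not the article's: a one-dimensional "universe" and a hypothetical noise scale sigma): de-omniscienting P by adding sensor noise does restore non-degenerate probabilities, but the answer then depends almost entirely on the arbitrary choice of sigma.

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = 0.45     # toy "state of the universe": a single number
threshold = 0.5   # the statement A: "x exceeds 0.5"

# A noiseless, omniscient P returns exactly 0 or 1 for A.
print("omniscient P(A) =", float(x_true > threshold))

# Add Gaussian sensor noise with scale sigma. P's posterior over x
# becomes N(reading, sigma^2), so P(A) is no longer degenerate --
# but its value now swings with sigma, a free variable with no
# natural choice.
for sigma in (0.01, 0.1, 1.0):
    reading = x_true + rng.normal(0.0, sigma)           # noisy observation
    samples = rng.normal(reading, sigma, size=100_000)  # posterior draws
    print(f"sigma={sigma:>5}: P(A) ~= {(samples > threshold).mean():.3f}")
```

The same issue would show up for a hard compute budget or a finite sensor grid: each is one more dial whose setting has no principled default.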