If I understand correctly, you may also reach your position without using a non-causal decision theory if you combine utilitarianism with the deontological constraint of being honest (or at least meta-honest [see https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases]) about the moral decisions you would make.
If people asked you whether you would kill (or did kill) a patient, and you couldn’t confidently say “No” (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must not kill the patient.
EDIT: here, honesty must also mean keeping promises (to a reasonable degree; something unexpected may always happen that you did not even consider as an improbable possibility when making the promise), in order to avoid problems like Parfit’s Hitchhiker.
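To make the Parfit’s Hitchhiker point concrete, here is a minimal sketch. The payoff numbers and the perfectly-predicting driver are my own illustrative assumptions, not part of the original argument: an agent who would renege on the promise to pay once rescued is predicted to renege and is never rescued, so an honesty norm that includes promise-keeping comes out ahead.

```python
# Illustrative sketch of Parfit's Hitchhiker (payoff numbers are made up).
# A driver offers a dying hitchhiker a ride to town in exchange for a
# promised fare, and is assumed to predict perfectly whether the
# hitchhiker will actually pay on arrival.

LIFE = 1_000_000  # value of being rescued from the desert
FARE = 100        # cost of paying the driver once safely in town

def outcome(keeps_promises: bool) -> int:
    """Total payoff for the hitchhiker, given their disposition."""
    driver_predicts_payment = keeps_promises  # perfect prediction assumed
    if not driver_predicts_payment:
        return -LIFE  # left in the desert: no ride is ever offered
    # Rescued; a promise-keeper then pays the fare as promised.
    return LIFE - FARE

# An agent who would renege once safe cannot honestly promise to pay,
# is predicted to renege, and so never gets the ride at all.
assert outcome(keeps_promises=True) > outcome(keeps_promises=False)
print(outcome(True), outcome(False))  # 999900 -1000000
```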