Your proposed solution does not work. As long as there is a chance of failure of this device (and any device will have such a chance), you will not avoid the problems assumed by your premises.
I disagree: the failure rate of the device doesn’t have to be zero for this to work; it just has to be many orders of magnitude lower than the natural failure rate of your body, so that you’re vastly more likely to keep living in good health than to experience a device failure.
It’s a difficult, but not inherently impossible, engineering problem. Only “false negative” failures (where you’re in poor health but the device fails to kill you) count here, so making the device err on the side of killing you for no reason during a suspected failure would actually be “safer” from a QI perspective.
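For what it’s worth, here’s a toy calculation of that point (all numbers are made up for illustration, not estimates of any real device) showing why only the false-negative rate matters under the QI premise:

```python
# Toy numbers, purely illustrative -- not based on any real device.
P_BAD_PER_DAY = 1e-3   # chance your body enters a "poor health" state on a given day (assumed)
P_FALSE_NEG   = 1e-12  # chance the device fails to trigger when it should (assumed)
P_FALSE_POS   = 1e-6   # chance the device kills you for no reason (assumed)

# Branches in which you are still alive at the end of the day:
#   - healthy and not killed by a false positive, or
#   - in poor health but the device failed to trigger (the only "bad" survival).
p_alive_healthy = (1 - P_BAD_PER_DAY) * (1 - P_FALSE_POS)
p_alive_bad     = P_BAD_PER_DAY * P_FALSE_NEG

# Under the QI premise you only ever experience branches where you survive,
# so what matters is the conditional probability of the bad branch.
p_bad_given_alive = p_alive_bad / (p_alive_bad + p_alive_healthy)
print(f"P(poor health | still experiencing anything) ~ {p_bad_given_alive:.2e}")
```

With those assumed numbers the bad branch comes out around 1e-15, and notice that the false-positive rate barely moves the result at all, since branches where the device kills a healthy you are simply ones you don’t experience.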