It costs you p * $100 for 0 ≤ p ≤ 1, where p depends on how “mean” you believe the predictor is.
Left-boxing costs 10^-24 * $1,000,000 = $10^-18 in expectation if you value your life at a million dollars. So if p > 10^-20, Left-boxing beats your strategy.
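The expected-cost comparison can be sketched as follows (a minimal illustration using the scenario's assumed numbers: life valued at $1,000,000, a 10^-24 predictor error rate, and a $100 fee for Right-boxing; the function names are mine):

```python
# Scenario's assumed numbers (from the discussion above).
LIFE_VALUE = 1_000_000   # dollars, the assumed finite value of a life
ERROR_RATE = 1e-24       # assumed chance the predictor is wrong
RIGHT_BOX_FEE = 100      # dollars paid when Right-boxing

def right_box_cost(p):
    """Expected cost of Right-boxing, where p is the probability the
    'mean' predictor makes you pay the $100."""
    return p * RIGHT_BOX_FEE

def left_box_cost():
    """Expected cost of Left-boxing: you die only if the predictor
    erred, so cost = error rate * value of life = 1e-18 dollars."""
    return ERROR_RATE * LIFE_VALUE

# Left-boxing beats Right-boxing whenever p * 100 > 1e-18,
# i.e. whenever p > 1e-20.
assert abs(left_box_cost() - 1e-18) < 1e-30
assert right_box_cost(1e-19) > left_box_cost()  # p above 1e-20: Left wins
assert right_box_cost(1e-21) < left_box_cost()  # p below 1e-20: Right wins
```

The crossover point p = 10^-20 falls directly out of dividing $10^-18 by the $100 fee.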
Note that FDT Right-boxes when you give life infinite value.
What’s special about this scenario with regard to valuing life finitely?
If you always value life infinitely, then it seems to me every action you could ever take gets an infinite expected value, since there is always some chance you die, which makes decision-making on the basis of utility pointless.
It doesn’t kill you in a case where you can choose not to be killed, though, and that’s the important thing.
> It costs you p * $100 for 0 ≤ p ≤ 1 where p depends on how “mean” you believe the predictor is. Left-boxing costs 10^-24 * $1,000,000 = $10^-18 if you value life at a million dollars. Then if p > 10^-20, Left-boxing beats your strategy.
Why would I value my life finitely in this case? (Well, ever, really, but especially in this scenario…)
Also, were you operating under the life-has-infinite-value assumption all along? If so, then:

1. You were incorrect about FDT’s decision in this specific problem, and
2. You should probably have mentioned you had this unusual assumption, so we could have resolved this discussion way earlier.
> Note that FDT Right-boxes when you give life infinite value.

> What’s special in this scenario with regards to valuing life finitely?

> If you always value life infinitely, it seems to me all actions you can ever take get infinite values, as there is always a chance you die, which makes decision making on basis of utility pointless.