It depends upon what the meaning of the word “is” is:
1. The failure rate has been tested over an immense number of predictions and evaluated as 10^-24 (to one significant figure). That is the currently accepted estimate of the predictor’s error rate for scenarios randomly selected from the sample.
2. The failure rate is theoretically 10^-24, over some assumed distribution of agent types. Your decision model may or may not appear anywhere in this distribution.
3. The failure rate is bounded above by 10^-24 for every possible scenario.
A self-harming agent in this scenario cannot be consistently predicted by Predictor at all (success rate 0%), so we know that (3) is definitely false.
(1) and (2) aren’t strong enough, because they give little information about Predictor’s error rate for your scenario and your decision model.
We have essentially zero information about Predictor’s true error bounds for agents that sometimes carry out self-harming actions. An FDT agent that takes the left box is exactly such an agent, and for FDT to recommend taking it, the upper bound on Predictor’s failure of subjunctive dependency must be less than the ratio between the utility cost of paying $100 and the utility cost of burning to death all intelligent life in the universe.
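To make the comparison concrete, here is a minimal sketch with an arbitrary stand-in magnitude for the disutility of burning to death all intelligent life (the scenario gives no exact figure, and the true number is arguably far larger or unbounded):

```python
# Rough sketch of the expected-utility comparison (illustrative numbers only).
# FDT can prefer the left box only if the expected cost of Predictor failing
# (subjunctive dependency breaking, so you burn) is below the certain $100 cost.

cost_right = 100        # certain cost of taking the right box (paying $100)
cost_failure = 1e30     # stand-in magnitude for burning all intelligent life;
                        # chosen arbitrarily for illustration

# FDT recommends "left" only if: p_failure * cost_failure < cost_right
required_bound = cost_right / cost_failure
print(f"Required bound on Predictor's failure rate: {required_bound:.1e}")

claimed_error_rate = 1e-24  # the 10^-24 figure from the scenario
print("Claimed rate tight enough?", claimed_error_rate < required_bound)
```

With this stand-in figure, the failure rate would need to be below 10^-28, so the claimed 10^-24 does not come close; any larger estimate of the disutility only tightens the requirement.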
We do not have anywhere near enough information to justify that tight a bound. So FDT can’t recommend such an action. Maybe someone else can write a scenario that is in a similar spirit but isn’t so flawed.
Thanks, I appreciate this. Your answer clarifies a lot, and I will think about it more.