Allowing for a margin of error, the simulation would indeed make do with lower fidelity. Yet the smaller the tolerable margin of error, the more closely the predictive model would have to resemble (be isomorphic to) the functionality of all components involved in the outcome (aside from some locally inverted dynamics such as the one you pointed out).
Given an example such as “chess novice versus grandmaster”, a very rough model does indeed suffice until you get into extremely small tolerable epsilons (such as “no wrong prediction in 3^^^3 runs”).
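(For concreteness, and assuming each run is an independent chance to err with probability ε: the chance of no wrong prediction in N runs is (1 − ε)^N, which stays near 1 only when ε is roughly 1/N or smaller, so a requirement like "no wrong prediction in 3^^^3 runs" forces the per-run error rate to be absurdly tiny.)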
However, for the present example, the proportion of one-boxers to two-boxers doesn't seem at all that lopsided.
Thus, to maintain very high accuracy, the model would need to capture most of what distinguishes the two groups. I do grant that as the required accuracy is allowed to drop into the low-sigma range, the model probably would be very different from the actual human being, i.e. the parts that are isomorphic to that human's thought process may reflect no more than a sliver of that person's unique cognitive characteristics.
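To make the base-rate point concrete, here is a minimal sketch (the fractions are made up for illustration, not taken from the problem): a predictor that only knows the overall proportion and always guesses the majority choice caps out at the base rate, so near an even split it gains nothing without modelling what actually distinguishes individuals.

```python
# Toy sketch with illustrative (made-up) base rates: how accurate a predictor
# can be if it only knows the population proportion and always guesses the
# majority choice, without modelling the individual at all.
def majority_guess_accuracy(p_one_box: float) -> float:
    """Accuracy of always predicting the more common choice."""
    return max(p_one_box, 1.0 - p_one_box)

for p in (0.99, 0.9, 0.6, 0.5):
    print(f"one-boxer fraction {p:.2f}: base-rate accuracy {majority_guess_accuracy(p):.2f}")

# With a lopsided split (0.99) the rough model is already right 99% of the time;
# near an even split it can't beat a coin flip, so any further accuracy must come
# from capturing what actually distinguishes the two groups.
```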
All in the details of the problem, as always. I may have overestimated Omega’s capabilities. (I imagine Omega chuckling in the background.)