Let’s say that there are 40 targets that need to be bombed, and each plane can carry 10 bombs normally, or 20 bombs if it doesn’t carry fuel for the return trip. We’ll assume for simplicity that half of the bombs loaded on a plane will find their targets.
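The raw arithmetic of the two loadings can be checked directly. This is only a sketch of the stated numbers (it treats expected hits as exact when counting sorties):

```python
# Basic model: 40 targets, a 50% per-bomb hit rate,
# 10 bombs with return fuel or 20 bombs on a one-way trip.

TARGETS = 40
HIT_RATE = 0.5
BOMBS_ROUND_TRIP = 10   # plane carries return fuel and can fly again
BOMBS_ONE_WAY = 20      # return-fuel space filled with extra bombs

expected_hits_round_trip = BOMBS_ROUND_TRIP * HIT_RATE   # 5.0
expected_hits_one_way = BOMBS_ONE_WAY * HIT_RATE         # 10.0

# Expected sorties needed to cover all 40 targets under each loading:
sorties_round_trip = TARGETS / expected_hits_round_trip  # 8.0, planes reusable
sorties_one_way = TARGETS / expected_hits_one_way        # 4.0, each plane lost

print(expected_hits_round_trip, expected_hits_one_way)
print(sorties_round_trip, sorties_one_way)
```

So a one-way loading delivers twice the bombs per sortie, at the price of losing the plane each time; the rest of the exchange is about what further considerations do to that trade.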
What if there’s a hidden variable? Say, a newly-trained flight crew has a skill of 1, 2, or 3 with equal probability. On any given mission, each bomb has a 10% chance, multiplied by the crew’s skill, to hit its target, and then at mission’s end, assuming adequate fuel, the crew has that same chance of returning alive. If they do, skill increases by one, to a maximum of seven.
Furthermore, let’s say the military has a very long but finite list of targets to bomb, and is mainly concerned with doing so cost-effectively. Building a new plane and training the crew for it costs 10 resources, and then sending them out on a mission costs resources equal to the number of previous missions that specific crew has been sent on, due to medical care, pensions (even if the crew dies, there are certain obligations to any surviving relatives), mechanical repairs and maintenance, etc.
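Before asking for the optimal strategy, the extended model can at least be probed numerically. A minimal Monte Carlo sketch, under one reading of the rules above: survival chance equals the per-bomb hit chance (10% times skill), a one-way crew flies exactly once, and a crew's n-th mission costs n−1 resources. The `hits_per_resource` figure of merit is illustrative, not necessarily the military's actual objective:

```python
import random

MAX_SKILL = 7

def simulate_crew(one_way, rng):
    """Simulate one crew's career; return (total_hits, total_cost).

    Assumptions (one reading of the model): starting skill is 1, 2, or 3
    with equal probability; each bomb hits with probability 0.1 * skill;
    a crew with return fuel survives the mission with that same probability
    and then gains a skill point (capped at MAX_SKILL); a one-way crew
    flies exactly once.
    """
    skill = rng.choice([1, 2, 3])
    bombs = 20 if one_way else 10
    cost = 10                    # building the plane and training the crew
    hits = 0
    missions = 0
    while True:
        cost += missions         # mission cost = number of previous missions
        p = 0.1 * skill
        hits += sum(rng.random() < p for _ in range(bombs))
        missions += 1
        if one_way or rng.random() >= p:   # plane expended or crew lost
            return hits, cost
        skill = min(skill + 1, MAX_SKILL)

def hits_per_resource(one_way, trials=100_000, seed=0):
    """Average hits bought per resource spent, over many crews."""
    rng = random.Random(seed)
    results = [simulate_crew(one_way, rng) for _ in range(trials)]
    total_hits = sum(h for h, _ in results)
    total_cost = sum(c for _, c in results)
    return total_hits / total_cost

print("one-way :", hits_per_resource(True))
print("reusable:", hits_per_resource(False))
```

Whether this captures the intended rules (in particular, whether survival really uses the same 10%-times-skill chance) is a modeling question; the sketch only makes the trade-offs concrete enough to compare strategies.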
What would the optimal strategy be then?
Further complications aren’t relevant to the main point. Do you understand the theory of the basic example now, or do you not?
Yes, I understand the theory.
OK then. You can of course add additional factors to the basic model, and some of these will mitigate or even overwhelm the original effect. No problem with that. However, your original mathematical intuition about the basic model was mistaken, and that’s what I was talking to you about.
In general: let’s say someone proposes a simple mathematical model X for phenomenon Y, and the model gives you conclusion Z.
It’s always a complicated matter whether X is really a good enough model of Y in the relevant way, and so there’s a lot of leeway granted on whether Z should actually be drawn from Y.
However, it’s a simple mathematical fact whether Z should be drawn from X or not, and so a reply that gets the workings of X wrong is going to receive vigorous criticism.
That’s all I have to say about that. We cool?
It is a longstanding policy of mine to avoid bearing malice toward anyone as a result of strictly theoretical matters. In short, yes, we cool.