When the general case seems confusing, it’s often helpful to work out a specific example.
Let’s say that there are 40 targets that need to be bombed, and each plane can carry 10 bombs normally, or 20 bombs if it doesn’t carry fuel for the return trip. We’ll assume for simplicity that half of the bombs loaded on a plane will find their targets (accounting for misses, duds and planes shot down before they’re finished bombing).
Then, under the normal scheme, each flight delivers ten bombs and so destroys five targets on average, meaning it would take eight flights to bomb the forty targets, and six of those planes would go down (a 75% loss rate per sortie). If the planes were instead loaded with extra bombs in place of return fuel, each flight would destroy ten targets on average, so it would take only four such flights (all of which would of course go down).
If there are eight flight crews to begin with, drawing straws for the doomed flights gives you a 50% chance of surviving, whereas the normal procedure leaves you only a 25% chance. If those are all the missions, it’s clearly rational to prefer the lottery system to the normal one. (If instead the missions are going to continue indefinitely, of course, you’re doomed with probability 1 either way.) And of course the military brass would be happy to achieve a given objective with only 2⁄3 the usual losses.
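To make the arithmetic concrete, here is a minimal sketch of both schemes. It uses only the numbers stated above: 40 targets, 10 or 20 bombs per plane, half of loaded bombs hitting, eight crews, and the stated loss figures.

```python
# Stylized bomber arithmetic from the example above.
# Assumptions (all from the discussion): 40 targets; 10 bombs per plane
# normally, or 20 without return fuel; half of all loaded bombs hit;
# 8 crews; 6 of 8 round-trip flights lost vs. all 4 one-way flights lost.

TARGETS = 40
CREWS = 8

def flights_needed(bombs_per_plane, hit_rate=0.5):
    """Expected flights to destroy all targets."""
    hits_per_flight = bombs_per_plane * hit_rate
    return TARGETS / hits_per_flight

normal_flights = flights_needed(10)   # 8 flights
one_way_flights = flights_needed(20)  # 4 flights

normal_losses = 6                     # stated loss figure for the normal scheme
one_way_losses = one_way_flights      # every one-way flight is lost

print(f"Normal:  {normal_flights:.0f} flights, {normal_losses} crews lost, "
      f"survival chance {(CREWS - normal_losses) / CREWS:.0%}")
print(f"One-way: {one_way_flights:.0f} flights, {one_way_losses:.0f} crews lost, "
      f"survival chance {(CREWS - one_way_losses) / CREWS:.0%}")
```

Running it reproduces the figures above: 25% survival under the normal procedure, 50% under the lottery, and 4 losses instead of 6.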
The difficulty with thinking in terms of “half-lives of danger” is that your probability of survival falls off exponentially with the number of half-lives (n half-lives means a (1/2)^n chance of surviving), so if you try to treat them as simple additive disutilities, you’ll run into problems. (For instance, if you’re facing a coinflip between dangerous activities A and B, where A consists of one half-life of danger and B consists of five half-lives, your current predicament is not equivalent to the “average value” of three half-lives of danger.)
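A two-line check of that parenthetical (a minimal sketch; the one-versus-five half-life coinflip is the example’s own setup):

```python
# A fair coinflip between 1 half-life (survival 1/2) and 5 half-lives (1/32).
expected_survival = 0.5 * (0.5 ** 1) + 0.5 * (0.5 ** 5)
naive_survival = 0.5 ** 3  # what "averaging to 3 half-lives" would suggest

print(f"{expected_survival:.4f} vs. {naive_survival:.4f}")  # 0.2656 vs. 0.1250
```

Averaging the half-life counts understates your actual survival chance by more than half, because survival is exponential, not linear, in half-lives.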
What if there’s a hidden variable? Say a newly trained flight crew has a skill of 1, 2, or 3 with equal probability. On any given mission, each bomb has a chance of 10% times the crew’s skill of hitting its target, and at mission’s end, assuming adequate fuel, the crew has that same chance of returning alive. If they do, skill increases by one, to a maximum of seven.
Furthermore, let’s say the military has a very long but finite list of targets to bomb, and is mainly concerned with doing so cost-effectively. Building a new plane and training the crew for it costs 10 resources, and then sending them out on a mission costs resources equal to the number of previous missions that specific crew has been sent on, due to medical care, pensions (even if the crew dies, there are certain obligations to any surviving relatives), mechanical repairs and maintenance, etc.
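For concreteness, here is a Monte Carlo sketch of this extended model. The mechanics (skill, hit and survival chances, the 10-resource setup cost, upkeep equal to the number of previous missions) are as stated above; the target-list length, bomb loads, run count, and the family of candidate policies (fly round-trip until some mission count, then go one-way) are hypothetical choices of mine, not part of the setup.

```python
import random

# Stated rules: a new crew has skill 1-3 (uniform); each bomb hits with
# probability 0.1 * skill; with return fuel the crew survives with that same
# probability, and surviving crews gain +1 skill up to 7. A new plane + crew
# costs 10 resources; a crew's next mission costs resources equal to its
# number of previous missions.
# Hypothetical additions: 1000 targets per run, 10 bombs round-trip vs. 20
# one-way (as in the basic example), and a simple policy family: fly
# round-trip until `retire_after` missions are flown, then send the crew
# one-way with a double load.

BOMBS_ROUND_TRIP = 10
BOMBS_ONE_WAY = 20
TARGETS_PER_RUN = 1000

def run(retire_after, rng):
    """Total resources spent destroying TARGETS_PER_RUN targets."""
    cost, destroyed = 0, 0
    while destroyed < TARGETS_PER_RUN:
        skill = rng.randint(1, 3)
        cost += 10                      # build plane, train crew
        missions = 0
        alive = True
        while alive and destroyed < TARGETS_PER_RUN:
            cost += missions            # upkeep equals previous mission count
            one_way = missions >= retire_after
            bombs = BOMBS_ONE_WAY if one_way else BOMBS_ROUND_TRIP
            p = 0.1 * skill
            destroyed += sum(rng.random() < p for _ in range(bombs))
            missions += 1
            if one_way or rng.random() >= p:
                alive = False           # crew spent one-way, or shot down
            else:
                skill = min(skill + 1, 7)
    return cost

rng = random.Random(0)
for retire_after in [0, 1, 3, 5, 10]:
    costs = [run(retire_after, rng) for _ in range(200)]
    print(f"retire after {retire_after:2d} missions: "
          f"avg cost {sum(costs) / len(costs):.0f}")
```

This only compares a few fixed thresholds, so at best it brackets the optimum; the real tradeoff is between letting skill compound on round trips and the rising per-mission upkeep.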
What would the optimal strategy be then?
Further complications aren’t relevant to the main point. Do you understand the theory of the basic example now, or do you not?
Yes, I understand the theory.
OK then. You can of course add additional factors to the basic model, and some of these will mitigate or even overwhelm the original effect. No problem with that. However, your original mathematical intuition about the basic model was mistaken, and that’s what I was talking to you about.
In general: let’s say someone proposes a simple mathematical model X for phenomenon Y, and the model gives you conclusion Z.
It’s always a complicated matter whether X is really a good enough model of Y in the relevant way, and so there’s a lot of leeway granted on whether Z should actually be drawn from Y.
However, it’s a simple mathematical fact whether Z should be drawn from X or not, and so a reply that gets the workings of X wrong is going to receive vigorous criticism.
That’s all I have to say about that. We cool?
It is a longstanding policy of mine to avoid bearing malice toward anyone as a result of strictly theoretical matters. In short, yes, we cool.