The probability of drawing the long straw twice in a row is four times as high as the probability of making it back twice in a row given 25% survival.
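The factor of four follows directly from squaring the per-round survival chances (a quick check, using the 50% straw-drawing odds and 25% round-trip survival from the example):

```python
# Surviving two rounds: straw lottery (50% each round) vs. normal trips (25% each).
straw = 0.5 ** 2       # 0.25
normal = 0.25 ** 2     # 0.0625
print(straw / normal)  # 4.0
```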
There is an additional constraint which you are missing here. Training pilots is expensive, and the military wants to recoup that cost by sending everyone they bother to train out on a certain number of missions. They weren’t going to just run two rounds and then decide the war’s over; rather, the plan was to keep sending out bombing raids until they ran out of pilots, whether through death or retirement, and then train more pilots so they could continue bombing Japan.
That’s at best an argument against the political viability of iterated straw drawing due to general irrationality, not against the rationality of iterated straw drawing itself. The pilots are definitely worse off when everyone is sent to their death. If the pilots’ opinions don’t matter, sending them all to their death is the best option, since it saves training costs. If a compromise acceptable to the pilots needs to be found, then iterated straw drawing is the best option for everyone, for any possible mission count and any given cadre size, provided the pilots can make the necessary precommitment. Command might reject this compromise because some pilots would appear to be freeloading, but it would be acting irrationally in doing so.
Of course, doing the straw drawing at an earlier stage and tailoring training according to whether each pilot is going to be sent to their death would be even more efficient, but that level of precommitment seems less psychologically plausible.
The problem is, pilots aren’t optimizing for overall survival. Somebody who wanted to live to see the end of the war, at all costs, could’ve just faked some medical problem and gotten themselves a desk job. The perceived-reproductive-fitness boost associated with being a member of the flight crew is contingent on actually flying (and making it back alive, of course). In simpler terms, nobody gets laid by drawing the long straw.
That’s your third completely unconnected argument, and this one doesn’t make Japan-and-return missions rational either, assuming straw drawing is viable. Even if the pilots are rationally maximizing some combination of survival and military glory, that doesn’t mean round-trip bombing missions to Japan, with most of the load devoted to fuel, are an efficient way to gain it. You could have all pilots volunteering to be part of the draw for one-way missions, with those who draw long straws being reassigned to Europe or wherever else they can earn glory more fuel-efficiently.
You’re assuming that straw drawing is viable. I’m trying to show why it wasn’t.
You seem to have a theory, based on that invalid assumption, about what will and will not work to motivate people to take risks. Does that theory make any useful predictions in this case?
Then you are wasting everyone’s time; we already know that it wasn’t viable. It was suggested and rejected. The whole discussion was about a) what would be needed to make it viable (e.g. a sufficiently high rationality level and a sufficiently strong precommitment) and b) whether it would be the rational thing to do given those requirements.
No. I was taking your model of what will and will not work to motivate people to take risks and demonstrating that your conclusion did not follow from it.
You’re still not understanding the math here. If we’re looking at this from the military’s perspective, it’s also a win, because an identical quantity of bombs dropped comes at the cost of 2 flight crews (and planes) rather than 3.
You had a mistake in your mathematical intuition. That’s OK, it happens to all of us here. The best thing is to admit it. (NB: it’s your argument that’s deeply flawed and not the conclusion, so raising other reasons why this would be a bad policy is irrelevant at the moment.)
I still can’t figure this out. Faced with the choice between leaving, having learned nothing, and continuing to lose karma for every damn post I make, what does your model recommend?
Say you start out with 100 pilots. You have two options:
1. Send all 100 pilots to Japan with full fuel tanks. 25 make it back alive.
2. Send 50 pilots to Japan, each carrying twice as many bombs. 0 make it back, but the 50 who stayed behind are still alive.
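The arithmetic behind the two options can be checked with a quick sketch (assuming, as above, a 25% survival rate on round trips and a doubled bomb load when no return fuel is carried):

```python
# Compare the two options for 100 trained pilots, assuming a 25%
# round-trip survival rate and that a one-way plane carries twice
# the bombs of a round-trip plane.

ROUND_TRIP_SURVIVAL = 0.25

def round_trip(pilots=100, bombs_per_plane=10):
    delivered = pilots * bombs_per_plane
    survivors = pilots * ROUND_TRIP_SURVIVAL
    return delivered, survivors

def one_way(pilots=100, bombs_per_plane=10):
    flying = pilots // 2                       # half the pilots fly one-way
    delivered = flying * bombs_per_plane * 2   # double bomb load, no return fuel
    survivors = pilots - flying                # the rest never take off
    return delivered, survivors

print(round_trip())  # (1000, 25.0) -> same bombs delivered, 25 survivors
print(one_way())     # (1000, 50)   -> same bombs delivered, 50 survivors
```

The bomb count per plane is illustrative; the comparison holds for any load, since the one-way scheme delivers the same tonnage with half the sorties.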
the military wants to recoup that cost by sending everyone they bother to train out on a certain number of missions.

No, the military just wants to accomplish its missions. If more pilots are alive after each mission, it means more pilots available for future missions.
Honest questions are far less likely to be voted down than wrong statements.
When the general case seems confusing, it’s often helpful to work out a specific example.
Let’s say that there are 40 targets that need to be bombed, and each plane can carry 10 bombs normally, or 20 bombs if it doesn’t carry fuel for the return trip. We’ll assume for simplicity that half of the bombs loaded on a plane will find their targets (accounting for misses, duds and planes shot down before they’re finished bombing).
Then, with the normal scheme, it would take eight flights on average to bomb the forty targets, and six of those planes would go down. If instead the planes were loaded with extra bombs instead of return fuel, it would take only four such flights (all of which would of course go down).
If there are eight flight crews to begin with, drawing straws for the doomed flights gives you a 50% chance of surviving, whereas the normal procedure leaves you only a 25% chance. If those are all the missions, it’s clearly rational to prefer the lottery system to the normal one. (If instead the missions are going to continue indefinitely, of course, you’re doomed with probability 1 either way.) And of course the military brass would be happy to achieve a given objective with only 2/3 the usual losses.
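Working the toy model’s numbers through explicitly:

```python
# Toy model from the example: 40 targets, half of all loaded bombs
# find a target, 25% survival on a round trip, 8 flight crews.

TARGETS = 40
HIT_RATE = 0.5

def flights_needed(bombs_per_plane):
    hits_per_flight = bombs_per_plane * HIT_RATE
    return TARGETS / hits_per_flight

normal = flights_needed(10)   # 10 bombs, carrying return fuel
one_way = flights_needed(20)  # 20 bombs, no return fuel

print(normal, one_way)   # 8.0 4.0 flights

# Expected losses: 75% of round trips go down; all one-way flights do.
print(normal * 0.75)     # 6.0 crews lost under the normal scheme
print(one_way * 1.0)     # 4.0 crews lost flying one-way

# A crew's survival odds with 8 crews total:
print(1 - one_way / 8)   # 0.5  -> straw lottery: 4 of 8 crews doomed
print(0.25)              # normal scheme: 25% per round trip
```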
The difficulty with thinking in terms of “half-lives of danger” is that the number of half-lives is logarithmic in your probability of survival (n half-lives means a survival probability of 2^-n), so if you try to treat them as simple disutilities, you’ll run into problems. (For instance, if you’re facing a coinflip between dangerous activities A and B, where A consists of one half-life of danger and B consists of five half-lives, your current predicament is not equivalent to the “average value” of three half-lives of danger.)
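A quick check of the coinflip example, taking n half-lives to mean a survival probability of 2^-n:

```python
# Averaging half-lives misrepresents the risk: a 50/50 coinflip between
# 1 half-life (survival 1/2) and 5 half-lives (survival 1/32) is better
# than a certain 3 half-lives (survival 1/8).
import math

def survival(half_lives):
    return 2.0 ** -half_lives

coinflip = 0.5 * survival(1) + 0.5 * survival(5)
print(coinflip)              # 0.265625, vs. 0.125 for three half-lives
print(-math.log2(coinflip))  # ~1.91 "effective" half-lives, not 3
```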
What if there’s a hidden variable? Say, a newly trained flight crew has a skill of 1, 2, or 3 with equal probability. On any given mission, each bomb has a 10% chance, multiplied by the crew’s skill, of hitting its target, and at mission’s end, assuming adequate fuel, the crew has the same chance of returning alive. If they do return, their skill increases by one, to a maximum of seven.
Furthermore, let’s say the military has a very long but finite list of targets to bomb, and is mainly concerned with doing so cost-effectively. Building a new plane and training the crew for it costs 10 resources, and then sending them out on a mission costs resources equal to the number of previous missions that specific crew has been sent on, due to medical care, pensions (even if the crew dies, there are certain obligations to any surviving relatives), mechanical repairs and maintenance, etc.
What would the optimal strategy be then?
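One way to explore that question is a Monte Carlo sketch of the extended model. The skill, hit-chance, and cost rules are as stated above; the bomb loads, target count, and the simplification of fielding one crew at a time are illustrative assumptions, not part of the original model:

```python
# Monte Carlo sketch: crews start with skill 1-3, each bomb hits with
# probability 0.1 * skill, a round-trip crew returns alive with that
# same probability, and each safe return raises skill (capped at 7).
# A new crew costs 10 resources; a crew's nth mission costs n-1.
import random

SKILL_CAP = 7
TRAIN_COST = 10
BOMBS_ROUND_TRIP = 10  # illustrative loads, matching the earlier example
BOMBS_ONE_WAY = 20

def run(targets, one_way=False, seed=0):
    """Simulate destroying `targets` targets; return total resource cost."""
    rng = random.Random(seed)
    cost = 0.0
    remaining = targets
    skill = None   # current crew's skill; None means a new crew is needed
    missions = 0   # missions flown by the current crew
    while remaining > 0:
        if skill is None:
            cost += TRAIN_COST
            skill = rng.choice((1, 2, 3))
            missions = 0
        cost += missions  # per-mission cost = number of previous missions
        missions += 1
        bombs = BOMBS_ONE_WAY if one_way else BOMBS_ROUND_TRIP
        hit_p = 0.1 * skill
        remaining -= sum(rng.random() < hit_p for _ in range(bombs))
        if one_way or rng.random() >= hit_p:
            skill = None  # crew lost (or deliberately expended)
        else:
            skill = min(skill + 1, SKILL_CAP)
    return cost

# Average cost over many seeds for each policy:
for policy in (False, True):
    avg = sum(run(200, one_way=policy, seed=s) for s in range(200)) / 200
    print("one-way" if policy else "round-trip", avg)
```

The interesting tension the sketch exposes is that one-way missions throw away the compounding skill (and the rising per-mission upkeep), so which policy is cheaper depends on how fast skill grows relative to attrition.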
Further complications aren’t relevant to the main point. Do you understand the theory of the basic example now, or do you not?
Yes, I understand the theory.
OK then. You can of course add additional factors to the basic model, and some of these will mitigate or even overwhelm the original effect. No problem with that. However, your original mathematical intuition about the basic model was mistaken, and that’s what I was talking to you about.
In general: let’s say someone proposes a simple mathematical model X for phenomenon Y, and the model gives you conclusion Z.
It’s always a complicated matter whether X is really a good enough model of Y in the relevant respects, and so there’s a lot of leeway granted on whether Z should actually be concluded about Y.
However, it’s a simple mathematical fact whether Z should be drawn from X or not, and so a reply that gets the workings of X wrong is going to receive vigorous criticism.
That’s all I have to say about that. We cool?
It is a longstanding policy of mine to avoid bearing malice toward anyone as a result of strictly theoretical matters. In short, yes, we cool.