Linearity is a very strong assumption. A linear approximation is fine when you’re comparing a 1% chance of dying with a 2% chance of dying, but the overall phenomenon may be nonlinear.
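To make this concrete, here’s a quick sketch (my own illustration; the specific −log(1 − p) curve and the scale factor of 10 are arbitrary choices for the sake of the example, not anything from the model under discussion):

```python
import math

def linear_disutility(p: float) -> float:
    """Linear model: disutility grows in proportion to death risk p."""
    return 10.0 * p

def convex_disutility(p: float) -> float:
    """Hypothetical convex model: -log of survival probability, scaled
    so its slope at p = 0 matches the linear model's. Near-linear for
    small p, but the cost blows up as certain death approaches."""
    return 10.0 * -math.log(1.0 - p)

for p in (0.01, 0.02, 0.50, 0.99):
    print(f"p = {p:4.2f}: linear = {linear_disutility(p):6.2f}, "
          f"convex = {convex_disutility(p):6.2f}")
```

The two models agree to two decimal places in the 1–2% range but diverge wildly as p approaches 1. That’s the sense in which a linear fit can be locally fine and globally wrong.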
People fight in war because they want to win, but most are willing to lose rather than die. Choosing to fight means you’d rather risk death or injury than submit to your foe. The very act of fighting back may mean you get offered better terms. It also signals to others that you won’t submit easily. There are benefits to fighting besides victory.
The cost of defeat is definitely a factor. Losing a war in sub-Saharan Africa won’t affect the average American’s day, but if Mecha-Hitler invaded at the head of an army of Nazi vampires, I’m sure people would be willing to tolerate a much higher risk of death when deciding whether to sign up.
They don’t work that way because humans’ utility is not a weighted linear sum, and humans don’t make decisions on the basis of a single calculated number, anyway.
I think your model is wrong. Humans don’t work that way.
They don’t work that way because they go to war to increase status, because they fail to register small risks, or for some other reason?