But I mean, isn’t it obvious that damage to the truck alone as a result of the attack would imply a considerably higher cost than whatever the shotgun was worth? (And yes, I think this is clearly the case even when you consider that the probability of being attacked is quite a bit less than 100%.) I don’t think this shows lives being insufficiently valued in the military; I think it just shows the sort of pervasive dysfunction we would expect in any large-scale organization lacking internal mechanisms to ensure accountability and proper response to incentives.
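To make that expected-cost comparison concrete, here is a back-of-the-envelope sketch. Every number in it is a hypothetical placeholder I made up; none of these figures come from the post or the thread:

```python
# Toy expected-cost comparison for the forgotten-shotgun argument.
# ALL numbers below are hypothetical placeholders, not figures from the post.

p_attack = 0.05               # assumed probability the convoy is attacked
truck_damage_cost = 80_000    # assumed repair/replacement cost if attacked (USD)
shotgun_value = 500           # assumed replacement cost of the shotgun (USD)

expected_attack_cost = p_attack * truck_damage_cost
print(f"Expected truck-damage cost: ${expected_attack_cost:,.0f}")
print(f"Shotgun value:              ${shotgun_value:,.0f}")
# Even with the attack probability well below 100%, the expected truck
# damage ($4,000 under these made-up numbers) dwarfs the shotgun's worth,
# which is the commenter's point about the raw dollar comparison.
```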
This still feels like focusing on the wrong aspect of the equation. The dollar value of the shotgun (and truck) is just completely irrelevant. The issues at stake here are:
1. (if you’re being charitable) maintaining a hierarchy where you know that if command asks someone to risk their life, they will risk their life even if they disagree with command’s goals (so that everyone going into battle knows that nobody else is going to defect; see the toy sketch after this list).
(I think this is more important than the “don’t forget your rifle next time” lesson, although there’s at least a bit of that too)
2. (if you’re being uncharitable) maintaining the hierarchy as a raw exercise in primate power for the sake of it.
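A minimal sketch of the trust point in (1), framed as an assurance game (stag hunt). The payoffs are invented purely for illustration:

```python
# Minimal assurance-game (stag-hunt) sketch of point (1).
# Payoffs are made up; first entry of each key = my action, second = my ally's.
# "hold" = follow the plan even at personal risk, "defect" = bail out.

payoff = {
    ("hold", "hold"): 3,      # plan works, battle won
    ("hold", "defect"): -2,   # I am exposed because my ally bailed
    ("defect", "hold"): 0,    # I am safe but the plan fails
    ("defect", "defect"): 0,  # everyone bails, plan fails
}

def best_response(p_ally_holds: float) -> str:
    """My best action given my credence that the ally holds the line."""
    ev_hold = (p_ally_holds * payoff[("hold", "hold")]
               + (1 - p_ally_holds) * payoff[("hold", "defect")])
    ev_defect = (p_ally_holds * payoff[("defect", "hold")]
                 + (1 - p_ally_holds) * payoff[("defect", "defect")])
    return "hold" if ev_hold >= ev_defect else "defect"

for p in (0.99, 0.8, 0.3):
    print(f"P(ally holds) = {p:.2f} -> best response: {best_response(p)}")
# With these payoffs, holding is only worthwhile when trust is high
# (p >= 0.4 here): doubt about anyone's compliance is self-reinforcing.
```

The exact threshold is an artifact of the made-up payoffs; the point is that each person’s willingness to hold depends on trusting that everyone else will, which is why command cares about absolute, visible reliability.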
I had a conversation with a friend on IM about this, in which I fleshed out this position a bit. The friend said something about “the point of the post was to give people a taste of the warrior ethos”. Which seemed sort of true but also not quite the point. (truth be told, I’m not sure I understood my friend’s comment, but I think my response mostly stands on its own)
...
I saw the entire essay through the lens of “if you’re going into combat, you absolutely need to be able to trust everyone to be following predictable procedures that allow you to make snap decisions in combat without worrying if other people are going to follow through.”
The charitable/interesting version of this post, to me, isn’t about warrior ethos and honor. Or, maybe it is (and I’m not steeped in warrior ethos enough to get it), but the relevant thing is about trust ethos, and common knowledge, and reliability.
It’s not intrinsically the case that the way to accomplish this is by orders from high command getting followed absolutely – there are other ways you can come to trust the others in your battalion.
The Ender Wiggin way is to trust that people are doing things that make sense to them and to have a good model of what sort of procedures they’ll be following.
I have no idea if the Ender Wiggin way actually works in practice. I suspect it depends on how accurate you think command’s information is generally going to be in different circumstances, and how quickly people in the chain of command might need to make decisions in the face of new information. But the cost of having to model “are your allies who are not in communication going to stick to the plan?” can be pretty high in the heat of the moment.
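A toy way to see that modeling-cost claim: treat “trust the plan” as one unit of cognitive work, and “model each ally, who models each remaining ally, and so on” as recursive work. This cost model is entirely made up and only illustrates the growth under those assumptions:

```python
# Toy illustration of the modeling-cost point: deciding under
# "everyone follows orders" is constant work, while recursively
# modeling each ally's reasoning about each other ally blows up.
# The cost model is invented for illustration, not from the post.

def decide_trusting_orders() -> int:
    """One unit of work: look up the plan and act."""
    return 1

def decide_modeling_allies(n_allies: int, depth: int) -> int:
    """Work if I model each ally, who each models the remaining allies,
    recursing down to `depth` levels of mutual modeling."""
    if depth == 0:
        return 1
    return 1 + n_allies * decide_modeling_allies(n_allies - 1, depth - 1)

for d in (1, 2, 3, 4):
    print(f"modeling depth {d}: {decide_modeling_allies(5, d)} units vs "
          f"{decide_trusting_orders()} unit when orders are trusted")
```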
“Dollar value of an insult” is relevant insofar as it’s a proxy for the ultimate utility cost: Can you win the war? Which in turn is directly affected by “can you trust that people will absolutely follow orders from command?”
I felt like the OP and several comments were missing the mark by not at least factoring this into the calculus.
(the alternate interpretations of “high command is having a stupid power trip” or “making a questionable judgment about attention-to-detail in particular” also seem quite plausible; it’s just that those should not be the only things getting discussed here)
I like your lens! For my purposes you are both sitting firmly in true-but-not-quite-the-point, which I consider a good outcome; I realize it is normally good practice to clearly articulate what the larger point is at the outset, but I am walking deliberately into ineffable territory so the usual method of telling readers what the conclusion is seems disingenuous. That being said, I can provide more detail about the lens(es) from which I wrote the post:
1) I have different intuitions about the value of human life from most of the community. These intuitions have been heavily shaped by my experiences in war.
2) I can’t assume anyone else has similar experiences, and they are famously difficult to communicate directly—I need a different way to bridge the inferential gulf.
3) I opted for a concrete question about the reader’s perspective (are you expendable) and provided a concrete personal experience (which says yes) to start.
Speaking to the questions of warrior ethos etc., I put it to you that this is not as distant and exotic a thing as most people suspect. Rather it is made up of things which are closer and you already understand, like trust, common knowledge, and reliability. The hard-to-grok part is exactly how they are arranged, and why they are arranged that way. One important detail is that it does not require understanding, merely execution: for example, lots of soldiers wouldn’t be able to articulate why common knowledge is important even to themselves, but they are perfectly good soldiers because they accept the knowledge-which-is-common and conduct themselves in such a way that it is maintained.
I have decided to write a follow-up, which will include a little clarification and a counterfactual to help illustrate. I will continue to use game-theoretic metaphors, but my motivation is not to achieve agreement about any particular detail of military affairs but rather to interrogate the intuitions which allow one to accept them.
A math metaphor, which I may repeat if it makes sense: we could probably come to an agreement about the details of some combat-related point, but my interest is in communicating the parallax between our perspectives of that point.
I strongly suspect that if I do well enough at this, the shift in perspective will allow more nuance about the important problems we are concerned with and our relationship to them. I think this would be valuable to the community.
I’m interested in seeing where you go from here. With the old LessWrong demographic, I would predict you would struggle, due to cryonics/life extension being core to many people’s identities.
I’m not so sure about current LW though. The fraction of the EA crowd that is total utilitarian probably won’t be receptive.
I’m curious what it is that your intuitions do value highly. It might be better to start with that.
I am also uncertain. But it appears to me that even an informed rejection will still be valuable. Follow up here.
(1) actually seems way worse than (2) because it concentrates risk. (2) can only get so bad; (1) ends in nuclear holocaust or worse.
I tend to agree with this view. I think that is also one of the aspects implied by the implicit and explicit communication post: the value of maintaining a highly cohesive and committed team may be higher (for a military force) than the risk of loss of life, because in a real war many more lives will be lost (at least, I guess, that is the military’s reasoning).