If your plan ends in murder and children crying, what happens if your plan goes wrong?
The murder and children crying fail to occur in the intended quantity?
If your plan requires you to get into a car with your family, what happens if you crash?
Well, getting into a car with your family is not inherently bad, so it’s not a very good parallel… but if your overall point is that “expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way”, then that’s definitely true.
I think that the “what if it all goes wrong” sort of comment is meant to trigger the response of “oh god… it was all for nothing! Nothing!!!”. Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.
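To make that distinction concrete, here is a minimal sketch in Python, with made-up probabilities and payoffs. The only point is that the expected value is fixed at decision time by the numbers you had, while the realized value of any single run can differ from it arbitrarily.

```python
import random

# Hypothetical payoff table for a plan: outcome -> (probability, value).
# All names and numbers are invented purely for illustration.
outcomes = {
    "intended outcome": (0.90, +100.0),
    "partial failure":  (0.09,  -50.0),
    "disaster":         (0.01, -500.0),
}

# Expected value is a property of the decision, computed before acting.
expected_value = sum(p * v for p, v in outcomes.values())
print(f"expected value: {expected_value:+.1f}")  # +80.5

# Realized value is a property of one particular run of the world.
probs, values = zip(*outcomes.values())
realized = random.choices(values, weights=probs)[0]
print(f"realized value: {realized:+.1f}")

# A bad draw (realized == -500.0) does not retroactively change
# expected_value; it was +80.5 at the moment of choice either way.
```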
The features of my plan designed to handle things going wrong kick in, and the damage is mitigated. I don’t go on vacation, despite the nonrefundable expenses incurred. The plan didn’t end in death and sadness, even if a particular implementation did.
When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. For a start, a failure of the plan could easily mean the murder or crying fails to happen at all. Then there is the consideration that an unspecified failure has completely undefined behaviour: anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
For most people, murder and children crying are a bad outcome for a plan, but if they’re what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could “fail” into an outcome with more utilons than murder and children crying, but such failures must be improbable: if they weren’t, the planner would presumably have selected one of them as the desired plan outcome.
Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.
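To put that selection argument in concrete terms, here is a toy sketch (all outcome names and utilities invented): the planner aims at the best achievable outcome, so anything counted as a failure ranks below the target by construction.

```python
# Outcomes the planner considers achievable, with the utility (in
# "utilons") each would yield. Purely illustrative names and numbers.
achievable = {
    "murder and children crying":    -200.0,
    "plan fizzles, nothing happens": -300.0,
    "plan backfires badly":          -400.0,
}

# The planner targets the achievable outcome with the highest utility...
target = max(achievable, key=achievable.get)

# ...so every other achievable outcome (every "failure") is worse than
# the target by construction. An outcome better than the target cannot
# be in this set; it would have been chosen as the target instead.
failures = {o: u for o, u in achievable.items() if o != target}
assert all(u < achievable[target] for u in failures.values())
print(f"target: {target!r}; all failure modes rank below it")
```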
I think we need to examine what we mean by ‘fail’.
A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.
If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail, because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)
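Under that definition, failure is a property of the plan’s own structure rather than of surprise as such. A minimal sketch of the vacation example, with the event names and the classify helper invented for illustration:

```python
# Events the plan explicitly prepares for, and how the plan labels them.
# "major illness" is declared a failure state up front; "rain" is merely
# handled. Any event outside this table is an unprepared contingency.
prepared = {
    "smooth trip":   "success",
    "rain":          "success",   # handled: pack umbrellas, reshuffle days
    "major illness": "failure",   # explicit failure state: cancel the trip
}

def classify(event: str) -> str:
    """Classify an actual event against the plan's own structure."""
    if event not in prepared:
        # Not merely "different from the most likely outcome": an
        # unprepared contingency that blocks the intended outcome.
        return "failure (unprepared contingency)"
    return prepared[event]

for event in ("rain", "major illness", "volcanic eruption"):
    print(f"{event!r} -> {classify(event)}")
# 'rain' -> success, a surprise but not a failure
# 'major illness' -> failure, an explicit failure state
# 'volcanic eruption' -> failure (unprepared contingency)
```

The Mars-probe standard then amounts to growing the prepared table until nearly every imaginable event still maps to success.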