When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. For a start, failure of the plan could easily mean failing to bring about the murder or the crying at all. There is also the consideration that an unspecified failure has completely undefined behaviour: anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
For most people, murder and children crying are a bad outcome for a plan, but if they’re what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could “fail” and end in an outcome with more utilons than murder and children crying, but those failures are presumably improbable: if they weren’t, the planner would have selected one of them as the desired plan outcome instead.
I think we need to examine what we mean by ‘fail’.
A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.
If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail, because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)
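The definition above can be sketched in code. This is only an illustration of the distinction being drawn (the events, outcomes, and function names are my own invention, not from the discussion): a plan fails when an unprepared-for contingency prevents the intended outcome, or when an explicitly designated failure state is reached, but not merely because the actual outcome differs from the one judged most likely.

```python
# Hypothetical model of the vacation example: a plan with one prepared-for
# contingency and one explicitly designated failure state.

INTENDED_OUTCOME = "take the vacation"

# Contingencies the planner prepared for, with their planned responses.
handled_contingencies = {"flight delayed": "rebook on the next flight"}

# Outcomes the planner explicitly designated as failure states in advance.
explicit_failure_states = {"major illness"}  # "if seriously ill, cancel"

def plan_fails(event: str, outcome: str) -> bool:
    """Return True iff the plan has failed under the definition above."""
    if event in explicit_failure_states:
        return True   # an explicit failure state was reached
    if outcome != INTENDED_OUTCOME and event not in handled_contingencies:
        return True   # an unprepared-for contingency blocked the intended outcome
    return False

# A prepared-for contingency that still yields the intended outcome is not a failure:
print(plan_fails("flight delayed", "take the vacation"))     # False
# A designated failure state is a failure even though it was anticipated:
print(plan_fails("major illness", "cancel the vacation"))    # True
# An unanticipated event that blocks the intended outcome is a failure:
print(plan_fails("volcanic eruption", "airport closed"))     # True
```

On this model, more important plans (the Mars probe) correspond to expanding `handled_contingencies` until nearly every imaginable event still maps to the intended outcome.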
Or at least the planner would have the foresight to see that those better outcomes have become likely, and alter the plan so that it now results in utopia instead of murder.