A sufficient condition for a good [possibly AI-gen] plan
Current posts mainly focus on necessary properties of plans people would like to see proposed or executed. I suggest a sufficient condition instead:
A plan is good, and may be acted upon, at least when it is endorsed in advance, endorsed in retrospect, and endorsed in counterfactual.
Endorsed in advance: everyone relevant hears the plan and its possible outcomes beforehand, evaluates their acceptability, and accepts the plan.
Endorsed in retrospect: everyone relevant compares the intended outcomes with what actually happened, evaluates the plan, and has no regret.
Endorsed in counterfactual: given a choice among a set of plans, each person would still evaluate this specific plan as acceptable: somewhat satisfying, and not inducing much desire to switch.
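The three endorsements above combine into one conjunctive check over everyone relevant. A minimal sketch (all names here are hypothetical stand-ins, not a real API; each boolean abstracts away the hard part of actually eliciting the judgment):

```python
from dataclasses import dataclass

@dataclass
class Party:
    # Hypothetical stand-ins for one party's three judgments of a given plan.
    advance: bool        # accepted the plan and its possible outcomes beforehand
    retrospect: bool     # no regret after comparing intended vs. actual outcomes
    counterfactual: bool # would still find this plan acceptable among alternatives

def plan_is_good(parties):
    """Sufficient (not necessary) condition: unanimous three-way endorsement."""
    return all(p.advance and p.retrospect and p.counterfactual for p in parties)

# One party withholds counterfactual endorsement, so the sufficient
# condition fails for this plan (it may still be good for other reasons).
team = [Party(True, True, True), Party(True, True, False)]
```

Note the asymmetry this makes explicit: a single missing endorsement from a single relevant party blocks the conclusion, because the condition is sufficient rather than necessary.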
Choosing according to these criteria is still hard, but it should be a bit less mysterious.