I understand your desire to stick to an exegesis of your own essay, but part of a critical examination of your essay is seeing whether or not it is on point, so these sorts of questions really are “about” your essay.
Regarding your preliminary answer: by “correct” I assume you mean “correctly reflecting the desires of the human supervisors”? (In which case, this discussion feeds into our other thread.)
My bizarre question was just an illustrative example. It seems neither you nor I believe that would be an adequate criterion (though perhaps for different reasons).
If I may translate your position into my own terms: for a problem like “shoot first or ask first?”, the criteria (i.e., constraints) would be highly complex and highly contextual. OK, I’ll grant that’s a defensible design choice.
Earlier in the thread I said:
This is why I have homed in on scenarios where the AI has not yet received feedback on its plan. In these scenarios, the AI presumably must decide (even if the decision is only implicit) whether to consult humans about its plan first, or to go ahead with its plan first (and halt or change course in response to human feedback). To lay my cards on the table, I want to consider three possible policies the AI could have regarding this choice.
1. Always (or usually) consult first. We can rule this out as impractical if the AI is taking a large number of atomic actions.
2. Always (or usually) shoot first, and see what the response is. Unless the AI only makes friendly plans, I think this policy is catastrophic, since I believe there are many scenarios where an AI could initiate a plan and, before we know what hit us, we’re in an unrecoverably bad situation. Therefore, implementing this policy in a non-catastrophic way is FAI-complete.
3. Have some good criteria for picking between “shoot first” and “ask first” on any given chunk of planning. This is what you seem to be favoring in your answer above. (Correct me if I’m wrong.) These criteria will tend to be complex, and not necessarily formulated internally in an axiomatic way. Regardless, I fear making good choices between “shoot first” and “ask first” is hard, even FAI-complete. Screw up once, and you are in a catastrophe like the one under policy 2. (To fix ideas, I sketch a toy version of such a decision rule below.)
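To fix ideas, here is a minimal, purely illustrative sketch (in Python) of what the bare structure of policy 3 might look like. The features and thresholds it uses (estimated impact, estimated reversibility, confidence that supervisors would approve) are hypothetical placeholders I have invented for illustration, not a proposal for how the real criteria would be computed; the whole difficulty is that an actual AI’s criteria would be far more complex and contextual, and a single miscalibrated threshold lands us back in the catastrophe of policy 2.

```python
# Toy sketch of "policy 3": per chunk of planning, choose "ask first" or
# "shoot first". Every feature and threshold here is a hypothetical
# placeholder for illustration only.

from dataclasses import dataclass

@dataclass
class PlanChunk:
    description: str
    estimated_impact: float          # how large the effects are if the plan misfires
    estimated_reversibility: float   # 0.0 = unrecoverable, 1.0 = trivially undone
    confidence_in_approval: float    # AI's credence that supervisors would endorse it

def ask_or_act(chunk: PlanChunk,
               impact_threshold: float = 0.3,
               reversibility_threshold: float = 0.8,
               approval_threshold: float = 0.95) -> str:
    """Return 'ask' (consult first) or 'act' (shoot first) for one plan chunk."""
    # High-impact or hard-to-reverse actions always get escalated to humans.
    if chunk.estimated_impact > impact_threshold:
        return "ask"
    if chunk.estimated_reversibility < reversibility_threshold:
        return "ask"
    # Otherwise, act only if the AI is very confident humans would approve.
    return "act" if chunk.confidence_in_approval >= approval_threshold else "ask"

# Example: a low-impact, easily reversed, clearly endorsed step gets "act";
# anything consequential or uncertain falls back to "ask".
print(ask_or_act(PlanChunk("rename a temp file", 0.01, 0.99, 0.99)))  # -> act
print(ask_or_act(PlanChunk("deploy new strategy", 0.90, 0.20, 0.70)))  # -> ask
```

Even in this toy form, the load-bearing work is hidden inside the estimates and thresholds, which is exactly where I expect the FAI-completeness to bite.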
Can you let me know: have I understood you correctly? More importantly, do you agree with my framing of the dilemma for the AI? Do you agree with my assessment of the pitfalls of each of the 3 policies?