Probably the first thing to clarify is that I feel like you equivocate between the grader being something that is embedded in the real world and hence subject to manipulation by real-world consequences of the actor’s actions, and the grader being something that operates on plans in the agent’s head in order to select the best one. In the latter case the grader is still subject to manipulation, but the prospects for manipulation seem unrelated to the open-endedness of the domain and unrelated to taking dangerous actions.
This seems like a misunderstanding. While I’ve previously communicated to you arguments about problems with manipulating embedded grading functions, that is not at all what this post is intended to be about. I’ll edit the post to make the intended reading more obvious. None of this post’s arguments rely on the grader being embedded and therefore physically manipulable. As I wrote in footnote 1:
I’m not assuming the actor wants to maximize the literal physical output of the grader, but rather just the “spirit” of the grader. More formally, the actor is trying to argmax_p Grader(p) over plans p, where Grader can be defined over the agent’s internal plan ontology.
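To make the setup concrete, here is a minimal sketch (all names hypothetical) of what argmax_p Grader(p) looks like when the grader operates purely on plans in the agent's head. Note that the grader is just a function over internal plan representations; nothing here depends on it having a physical instantiation the actor could tamper with:

```python
def grader(plan):
    # Stand-in grading function defined over the agent's internal plan
    # ontology. Here it trivially prefers longer plans, purely for
    # illustration; a real grader would evaluate predicted outcomes.
    return len(plan)

def choose_plan(candidate_plans):
    # argmax_p Grader(p): select whichever internal plan the grader
    # scores highest. The "manipulation" concern is that the chosen
    # plan is whatever exploits the grader's scoring, not whatever is
    # actually good.
    return max(candidate_plans, key=grader)

plans = [("get-coffee",), ("get-coffee", "add-sugar"), ("do-nothing",)]
best = choose_plan(plans)
```

The worry in the post is precisely that `choose_plan` ranges over an enormous plan space, so it reliably surfaces whatever inputs `grader` mis-scores upward.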
Anyways, replying in particular to:
the prospects for manipulation seem unrelated to the open-endedness of the domain and unrelated to taking dangerous actions.
Open-ended domains are harder to grade robustly on all inputs because more stuff can happen, and the plan space grows exponentially, since the branching factor is the number of available actions. E.g. it’s probably far harder to produce a DOTA II game state that is emotionally manipulative to the grader (such that I look at it and feel compelled to output a ridiculously high number) than a manipulative state in the real world (which plays to e.g. their particular insecurities and desires, perhaps reminding them of triggering events from their past in order to make their judgments higher-variance).
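The exponential-blowup point above can be sketched with back-of-the-envelope arithmetic (the specific branching factors are illustrative assumptions, not claims about DOTA II or the real world):

```python
def plan_count(branching_factor, horizon):
    # With b actions available at each step and plans of length d,
    # there are b**d distinct plans. The grader must be robust across
    # this entire space, since the actor argmaxes over all of it.
    return branching_factor ** horizon

# Toy comparison: a constrained game-like domain vs. an open-ended one.
constrained = plan_count(10, 5)    # 10**5 plans
open_ended = plan_count(1000, 5)   # 10**15 plans
```

Even holding the horizon fixed, raising the branching factor by two orders of magnitude multiplies the plan space by ten orders of magnitude, which is the sense in which open-endedness makes robust grading harder.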