Sounds like the disagreement has mostly landed on the question of what to investigate first, which is pretty firmly “you do you” territory: whatever most improves your own picture of what’s going on is very likely what you should be thinking about.
On the other hand, I’m still left feeling like your approach is not going to be embedded enough. You say that investigating 2->3 first risks implicitly assuming too much about 1->2. My sketchy response is that the picture we want in the end need not even be consistent with having any 1->2 view. Everything is embedded, and implicitly reflective, even the decision theorist thinking about what decision theory an agent should have. So a firm 1->2 view can hurt rather than help, because its overly non-embedded assumptions have to be discarded later.
Using some of the ideas from the embedded agency sequence: a decision theorist may, in the course of evaluating a decision theory, consider a lot of #1-type situations. However, since the decision theorist is embedded as well, they don’t want to assume realizability even with respect to their own ontology. So, ultimately, they want a decision theory to exhibit “good behavior” on problems where no #1-type view is available, meaning some sort of optimality for non-realizable cases.
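To gesture at what “optimality for non-realizable cases” can look like, here is a minimal illustration borrowed from online learning (my own gloss, not something from the embedded agency sequence itself): in prediction with expert advice, the exponential-weights algorithm over $N$ experts with losses in $[0,1]$ guarantees, against an arbitrary loss sequence over $T$ rounds,

$$\sum_{t=1}^{T} \ell_t(a_t) \;-\; \min_{i \leq N} \sum_{t=1}^{T} \ell_t(i) \;\leq\; \sqrt{\frac{T \ln N}{2}},$$

with no assumption that any expert is actually correct. The guarantee is relative to the best hypothesis available rather than to the true environment, which is exactly the shape a guarantee has to take when realizability fails.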