I’ve queued a rather long series of LW posts laying out what I consider the current state of affairs on FAI-related open problems. A few of the posts concern this issue, and the OP is the first of the series.
Nice! Does that mean you have many new results queued for posting? What can I do to learn them sooner? :-)
Unfortunately, I also think that your non-self-referential alternative runs into similar issues: if subsequent agents follow the same general design, each one has to use a successively weaker axiom system.
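To make the “successively weaker” worry concrete, here’s a minimal sketch of the usual descending-trust picture; the tower of theories $T^{(k)}$ is my own illustration, not the actual construction from the OP or the queued posts:

```latex
% Illustrative only: iterated consistency extensions of PA.
% Define a finite tower of theories, each trusting the one below it:
\[
  T^{(0)} = \mathrm{PA}, \qquad
  T^{(k+1)} = T^{(k)} + \mathrm{Con}\!\left(T^{(k)}\right).
\]
% Agent i reasons in T^{(n-i)}. Each agent can license its successor,
% since T^{(k+1)} proves Con(T^{(k)}) by construction; but by Goedel's
% second incompleteness theorem no consistent T^{(k)} proves its own
% consistency, so the tower strictly descends:
\[
  T^{(n)} \vdash \mathrm{Con}\!\left(T^{(n-1)}\right), \quad
  \ldots, \quad
  T^{(1)} \vdash \mathrm{Con}\!\left(T^{(0)}\right), \quad
  T^{(0)} \nvdash \mathrm{Con}\!\left(T^{(0)}\right).
\]
```

Under this sketch, an initial agent equipped with $n$ levels of iterated consistency can only license $n$ generations of successors before the trust bottoms out at plain PA.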
Can you expand?