After you figure that out, yes, you would know what the correct thing to do is, so it's a good subgoal if it's doable. High optimality of an outcome doesn't by itself imply that achieving it is impossible, so the argument from "too good to be true" is not very reliable.
In this particular case (as with things like UDT), the express aim is to understand more rigorously what optimality is, a problem that isn't concerned with the practical difficulties of actually achieving it (or getting closer to it).