Yes, you need to have a theory of physics to write down a transition rule for a physical system. That is a problem, but it’s not at all the same problem as the “target format” problem. The only role the transition rule plays here is that it allows one to apply induction to efficiently prove some generalization about the system over all time steps.
In principle, a different but similarly distinguished concise description of the system’s behaviour could play a similar role (perhaps the recording of the system’s states plus the shortest program that outputs that recording?). Or perhaps there’s some way of choosing a distinguished “best” formalization of physics. But that’s rather out of scope of what I wanted to suggest here.
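To spell out the induction mentioned above (a schematic sketch; $s_t$ for the state at step $t$, $\mathrm{step}$ for the transition rule, and $P$ for the generalization being proved are just illustrative names):

$$P(s_0) \;\wedge\; \forall t\,\big(P(s_t) \rightarrow P(\mathrm{step}(s_t))\big) \;\Longrightarrow\; \forall t\, P(s_t)$$

One base case plus one inductive step covers every time step, so the proof doesn’t have to grow with the length of the system’s history the way an unrolled step-by-step argument would.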
But then you are measuring proof shortness relative to that system. And you could be using one of countless other formal systems which always make the same predictions, but relative to which different proofs are short and long.
It would be an O(1) cost to start the proof by translating the axioms into a more convenient format, much as Kolmogorov complexity is “language-dependent” but not asymptotically so, because any particular universal Turing machine can be simulated on any other at a constant cost.
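More explicitly, the Kolmogorov-complexity side of the analogy is the standard invariance theorem: for universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the pair of machines and not on the string $x$, with

$$K_U(x) \;\le\; K_V(x) + c_{U,V}.$$

The proof-length analogue is the claim above: relative to any adequate axiomatization, a proof in one system can be turned into a proof in another at the cost of a constant-length preamble deriving (a translation of) the other system’s axioms.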
The assumption (including that it takes in and puts out in Arabic numerals, and uses “*” as the multiplication command, and that buttons must be pressed,… and all the other things you need to actually use it) includes that.
These are all things that can be derived from a physical description of the calculator (maybe not in fewer steps than it takes to do long multiplication, but certainly in fewer steps than less trivial computations one might do with a calculator). There’s no observer dependency here.
No, that doesn’t work. It seems to me you’ve confused yourself by constructing a fake symmetry between these problems. It wouldn’t make any sense for Omega to “predict” whether you choose both boxes in Newcomb’s if Newcomb’s were equivalent to something that doesn’t involve choosing boxes.
More explicitly:
Newcomb’s Problem is “You sit in front of a pair of boxes, which are either both filled with money (if Omega predicted you would take one box in this case) or else only one is filled”. Note: describing the problem does not require mentioning “Newcomb’s Problem”; it can be expressed as a simple game tree (see here for some explanation of the tree format):
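For concreteness, here is a minimal Python stand-in for that tree (the names `NEWCOMB` and `payoff` are just illustrative; the box contents are the usual $1,000 transparent / $1,000,000 opaque values, consistent with the figures in the footnote below):

```python
# Newcomb's Problem as a payoff table over Omega's prediction and your choice.
# (Illustrative sketch: the $1,000 and $1,000,000 box contents are the usual
# values from the standard statement of the problem.)

NEWCOMB = {
    # Omega's prediction         your choice -> your payoff
    "predicted one-box": {"one-box": 1_000_000, "two-box": 1_001_000},
    "predicted two-box": {"one-box": 0,         "two-box": 1_000},
}

def payoff(prediction: str, choice: str) -> int:
    """Return your payoff given Omega's prediction and your actual choice."""
    return NEWCOMB[prediction][choice]

# With an accurate predictor, the prediction matches the choice:
assert payoff("predicted one-box", "one-box") == 1_000_000  # one-boxer's outcome
assert payoff("predicted two-box", "two-box") == 1_000      # two-boxer's outcome
```

Nothing in that table refers to any other problem or to an agent’s hypothetical behaviour elsewhere; that self-containedness is exactly the contrast drawn below.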
In comparison, your “Inverse Newcomb” is “Omega gives you some money iff it predicts that you take both boxes in Newcomb’s Problem, an entirely different scenario (i.e. not this case).”
The latter is more of the form “Omega arbitrarily rewards agents for taking certain hypothetical actions in a different problem” (of which a nearly limitless variety can be invented to justify any chosen decision theory¹), rather than being an actual self-contained problem which can be “solved”.
The latter also can’t be expressed as any kind of game tree without “cheating” and naming “Newcomb’s Problem” verbally—or rather, you can express a similar thing by embedding the Newcomb game tree and referring to the embedded tree, but that converts it into a legitimate decision problem, which FDT of course gives the correct answer to (TODO: draw an example ;).
(¹): Consider Inverse^2 Newcomb, which I’d call the proper symmetric inverse of “Inverse Newcomb”: Omega puts you in front of two boxes and says “this is not Newcomb’s Problem, but I have filled both boxes with money iff I predicted that you take one box in standard Newcomb”. Obviously here FDT takes both boxes for a tidy $1,001,000 profit (plus the $1,000,000 from Standard Newcomb). Whereas CDT gets… $1,000 (plus $1,000 from Standard Newcomb).
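Making that arithmetic explicit (a small sketch; it assumes Omega predicts accurately in both problems and that when both boxes aren’t filled only the $1,000 box is, matching the numbers above):

```python
# Totals for an FDT agent vs. a CDT agent across Standard Newcomb plus
# Inverse^2 Newcomb, using the numbers from the footnote above.

# Standard Newcomb: FDT one-boxes (and is predicted to), CDT two-boxes (ditto).
standard = {"FDT": 1_000_000, "CDT": 1_000}

# Inverse^2 Newcomb: both boxes are filled iff Omega predicted one-boxing in
# Standard Newcomb; taking both boxes is the obvious move for everyone here.
inverse_squared = {"FDT": 1_001_000, "CDT": 1_000}

for agent in ("FDT", "CDT"):
    print(agent, standard[agent] + inverse_squared[agent])
# FDT 2001000
# CDT 2000
```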