I suspect there is some merit to the Scientist’s intuition (and to the idea that constant returns are more “empirical”) that nobody has managed to explain well. I’ll try to explain it here.[1]
The Epistemologist’s notion of simplicity is about short programs with unbounded runtime which perfectly explain all evidence. The [non-straw] empiricist notion of simplicity is about short programs with heavily bounded runtime which approximately explain a subset of the evidence. The Epistemologist is right that there is nothing of value in the empiricist’s notion if you are an unbounded Solomonoff inductor. But for a bounded mind, two important facts come into play:
The implications of hypotheses can only be guessed using “arguments”. These “arguments” become less reliable the more conjunctive they are.
Induction over runtime-bounded programs turns out, for some reason, to agree with Solomonoff induction way more than the maximum entropy prior does, despite not containing any “correct” hypotheses. This is a super important fact about reality. (A toy illustration follows below.)
Therefore a bounded mind will sometimes get more evidence from “fast-program induction on local data” (i.e. just extrapolate without a gears-level model) than from highly conjunctive arguments about gears-level models.
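To make the second fact above slightly more concrete, here is a minimal toy sketch (mine, not anything from the dialogue, and not a real approximation of Solomonoff induction): the “fast, short programs” are just periodic pattern generators weighted by a crude 2^(-length) prior, and their mixture prediction is compared with the maximum-entropy prediction of 0.5. The hypothesis family, the prior, and the example sequence are all illustrative choices.

```python
# Toy contrast between (a) induction over a small family of fast, short
# programs and (b) the maximum-entropy predictor, on a binary sequence.
# The "programs" here are just periodic pattern generators -- an illustrative
# stand-in for "short programs with heavily bounded runtime", not a real
# approximation of Solomonoff induction.

from itertools import product

def bounded_program_prediction(history, max_period=6):
    """Mixture prediction P(next bit = 1), over all repeating patterns of
    length <= max_period that exactly reproduce the history, each weighted
    by 2**(-length) as a crude 'shorter program = simpler' prior."""
    weight_total = 0.0   # total weight on programs consistent with the history
    weight_one = 0.0     # weight on consistent programs whose next output is 1
    n = len(history)
    for period in range(1, max_period + 1):
        for pattern in product([0, 1], repeat=period):
            output = [pattern[i % period] for i in range(n + 1)]
            if output[:n] == list(history):   # program fits the data seen so far
                w = 2.0 ** (-period)          # crude description-length prior
                weight_total += w
                weight_one += w * output[n]
    if weight_total == 0.0:
        return 0.5  # no surviving program: fall back to ignorance
    return weight_one / weight_total

history = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # the pattern 0,1,1 repeating
print("bounded-program mixture, P(next bit = 1):", bounded_program_prediction(history))
print("maximum-entropy prior,   P(next bit = 1):", 0.5)
```

On this history, every surviving short program predicts that the next bit is 1 while the maximum-entropy predictor stays at 0.5, and none of the toy programs is a “correct” model of whatever actually generated the bits.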
[1] FWIW, I agree with the leading bit of Eliezer’s position: that we should think about the object-level and not be dismissive of arguments and concretely imagined gears-level models.
Slightly more spelled-out thoughts about bounded minds:
We can’t actually run the hypotheses of Solomonoff induction. We can only make arguments about what they will output.
In fact, almost all of the relevant uncertainty is logical uncertainty. The “hypotheses” (programs) of Solomonoff induction are not the same as the “hypotheses” entertained by bounded Bayesian minds. I don’t know of any published formal account of what these bounded hypotheses even are and how they relate to Solomonoff induction. But informally, all I’m talking about are ordinary hypotheses like “the Ponzi guy only gets money from new investors”.
In addition to “bounded hypotheses” (of unknown type), we also have “arguments”. An argument is a thing whose existence provides fallible evidence for a claim.
Arguments are made of pieces which can be combined “conjunctively” or “disjunctively”. The conjunction of two subarguments is weaker evidence for its claim than each subargument was for its subclaim. This is the sense in which “big arguments” are worse (see the toy calculation below).
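To put toy numbers on this (the 90% figure and the independence assumption are purely illustrative): write $C_1$ and $C_2$ for the two subclaims. Whatever the dependence between them,

$$P(C_1 \wedge C_2) \;\le\; \min\big(P(C_1),\, P(C_2)\big),$$

and if the pieces fail roughly independently the conjunction drops to the product: two 90%-reliable pieces give $0.9 \times 0.9 = 0.81$, and a ten-step conjunctive chain gives $0.9^{10} \approx 0.35$. Each additional conjunct is one more place where the argument can silently break.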