> I’m not merely saying that agents shouldn’t have precise credences when modeling environments more complex than themselves
You seem to be underestimating how pervasive/universal this critique is: essentially every environment is more complex than we are, at the very least once we’re embedded agents or other humans are involved. So I’m not sure where your criticism (which I agree with) goes beyond the basic argument, which already makes the point in a very strong way; it just seems to state it more clearly.
> The problem is that Kolmogorov complexity depends on the language in which algorithms are described. Whatever you want to say about invariances with respect to the description language, this has the following unfortunate consequence for agents making decisions on the basis of finite amounts of data: For any finite sequence of observations, we can always find a silly-looking language in which the length of the shortest program outputting those observations is much lower than that in a natural-looking language (but which makes wildly different predictions of future data).
Far less confident here, but I think this isn’t correct as a matter of practice. Conceptually, Solomonoff doesn’t say “pick an arbitrary language once you’ve seen the data and then do the math”; it says “pick an arbitrary language before you’ve seen any data and then do the math.” And if we need to implement the silly-looking language, there is a complexity penalty for doing that, one that’s going to be similarly large regardless of what baseline we choose, and we can determine how large it is by reducing the language to some other language. (And I may be wrong, but picking a language cleverly should not mean that Kolmogorov complexity turns something requiring NP programs to encode into something that P programs can encode, so this criticism seems weak anyway outside of toy examples.)
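To spell out the penalty I have in mind (this is just the standard invariance theorem, not anything specific to this thread): for any two universal description languages $L_1$ and $L_2$ there is a constant $c_{L_1,L_2}$, independent of the data $x$, such that

$$K_{L_1}(x) \le K_{L_2}(x) + c_{L_1,L_2},$$

where $c_{L_1,L_2}$ is roughly the length of an interpreter for $L_2$ written in $L_1$. So, on this framing, choosing a silly-looking language can only buy you a bounded head start, and that bound is the complexity penalty I’m gesturing at above.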
The obvious answer is: only when there is enough indeterminacy to matter, and I’m not sure anyone would disagree. The question isn’t whether there is indeterminacy, it’s how much, and whether accounting for it is worth the costs of using a more complex model instead of doing it the Bayesian way.
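To make “enough indeterminacy to matter” concrete, here’s a toy check (my own illustration; the act/skip framing and the utility numbers are made up, not from the post): an interval of credences is decision-irrelevant when every credence inside it recommends the same action, and since expected utility is linear in the credence, checking the two endpoints is enough.

```python
def best_action(p, u_act_if_true, u_act_if_false, u_skip=0.0):
    """Best act for a precise credence p that the proposition is true."""
    eu_act = p * u_act_if_true + (1 - p) * u_act_if_false
    return "act" if eu_act > u_skip else "skip"

def indeterminacy_matters(p_low, p_high, u_act_if_true, u_act_if_false, u_skip=0.0):
    """True iff credences inside [p_low, p_high] disagree about the best action."""
    return (best_action(p_low, u_act_if_true, u_act_if_false, u_skip)
            != best_action(p_high, u_act_if_true, u_act_if_false, u_skip))

# Acting pays 1 if the proposition is true and -1 if false; skipping pays 0.
print(indeterminacy_matters(0.6, 0.8, 1, -1))  # False: the whole interval says "act"
print(indeterminacy_matters(0.4, 0.8, 1, -1))  # True: the interval straddles the 0.5 break-even point
```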
You also didn’t quite endorse suspending judgement in that case: “If someone forced you to give a best guess one way or the other, you suppose you’d say ‘decrease’. Yet, this feels so arbitrary that you can’t help but wonder whether you really need to give a best guess at all…” So, yes, if it’s not directly decision-relevant, sure, don’t pick; say you’re uncertain. That’s best practice even if you use precise probabilities: you can have a preference for robust decisions, or a rule for withholding judgement when your confidence is low. But if it is decision-relevant, and only a binary choice is available, your best guess matters. And this is exactly why Eliezer says that when there is a decision, you need to focus your indeterminacy, and why he was dismissive of DS and similar approaches.
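To make that distinction concrete, here’s the same toy setup with the forced-choice case added (again my own sketch, not Eliezer’s rule or anything proposed in the post): suspend judgement when the credence interval straddles the break-even point and nothing forces a call, but when a binary choice is forced, collapse the interval to a single best-guess credence (here, naively, its midpoint) and decide with that.

```python
def eu_act(p, u_true=1.0, u_false=-1.0):
    """Expected utility of acting, given credence p; skipping is worth 0."""
    return p * u_true + (1 - p) * u_false

def decide(p_low, p_high, forced=False):
    lo, hi = eu_act(p_low), eu_act(p_high)
    if min(lo, hi) > 0:
        return "act"      # every admissible credence favors acting
    if max(lo, hi) <= 0:
        return "skip"     # every admissible credence favors skipping
    if not forced:
        return "suspend"  # indeterminate, and nothing forces a call
    # Forced binary choice: collapse the interval to a single "best guess"
    # credence (naively, its midpoint) and decide with that.
    return "act" if eu_act((p_low + p_high) / 2) > 0 else "skip"

print(decide(0.4, 0.8))               # 'suspend'
print(decide(0.4, 0.8, forced=True))  # 'act'  (midpoint credence 0.6 clears break-even)
print(decide(0.3, 0.6, forced=True))  # 'skip' (midpoint credence 0.45 does not)
```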