Robin, the circumstances under which a Bayesian will come to believe that a system is optimizing are the same circumstances under which a message-length minimizer will send a message describing the system’s “preferences”: namely, when your beliefs about its preferences are capable of making advance predictions, or at least of being more surprised by some outcomes than by others.
Most of the things we find it useful to describe as optimizers have preferences that are stable over a longer timescale than the repeated observations we make of them. An inductive suspicion of such stability is enough of a prior to deal with aliens (or evolution). Also, different things of the same class, like individual humans, often have similar preferences.
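To make that Bayes/message-length correspondence concrete, here is a minimal sketch in Python, with hypothetical outcomes and probabilities chosen only for illustration: a model that credits the system with preferences compresses the observed behavior better than a no-preference model exactly when those preferences buy advance predictions, and the code-length savings are (given equal priors) the log Bayes factor in favor of “this system optimizes.”

```python
import math

# Toy illustration of the Bayes / message-length correspondence.
# Outcomes and probabilities below are hypothetical, for the example only.

# Suppose we watch a system pick among four outcomes, and it keeps
# steering toward outcome "A" far more often than chance would suggest.
observations = ["A", "A", "B", "A", "A", "A", "C", "A", "A", "A"]

# Model 1: no preferences -- every outcome equally likely.
uniform = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# Model 2: the system "prefers" A -- it concentrates outcomes there.
prefers_a = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.05}

def code_length_bits(model, data):
    """Shannon code length of the data under the model, in bits."""
    return -sum(math.log2(model[x]) for x in data)

l_uniform = code_length_bits(uniform, observations)
l_prefer = code_length_bits(prefers_a, observations)

print(f"no-preference model: {l_uniform:.1f} bits")
print(f"preference model:    {l_prefer:.1f} bits")

# The difference in code lengths is the log Bayes factor (with equal
# priors), so the message-length minimizer pays to describe "preferences"
# only when they yield advance predictions -- the same condition under
# which the Bayesian comes to believe the system is optimizing.
print(f"bits saved by positing preferences: {l_uniform - l_prefer:.1f}")
```

A full minimum-message-length account would also charge for the bits needed to state the preference model itself; the point stands whenever the predictive gains exceed that fixed cost, which the stability assumption above makes easier with every further observation.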