Yeah, we certainly can't do better than the optimal Bayes update, and you're right that any scheme violating that law can't work. I also share your intuition that "iteration can't work"; that intuition is the main driver of this write-up.
As far as I'm concerned, the central question is: how far does the optimal Bayes update actually reach in concept extrapolation? Could a training set drawn from some limited regime contain enough information to extrapolate the relevant concept to situations humans don't yet understand? Conservation of expected evidence isn't really sufficient to settle that question, because the iteration might just be a series of computational steps toward a single Bayes update (nothing requires that each individual step optimally use all available information).
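For concreteness, the law in question is conservation of expected evidence: for a hypothesis $H$ and a forthcoming observation $E$, your current credence already equals your expected posterior,

$$\mathbb{E}_E\!\left[P(H \mid E)\right] \;=\; \sum_e P(E = e)\, P(H \mid E = e) \;=\; P(H).$$

So this only rules out an iterated scheme if each round is itself meant to be a well-calibrated update whose direction you can predict in advance; a sequence of steps that merely *computes* one big posterior doesn't conflict with it.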