From the version of this post on the MIRI blog:

Bayesian reasoning works by starting with a large collection of possible environments, and as you observe facts that are inconsistent with some of those environments, you rule them out. What does reasoning look like when you’re not even capable of storing a single valid hypothesis for the way the world works? Emmy is going to have to use a different type of reasoning, and make updates that don’t fit into the standard Bayesian framework.
I think maybe this paragraph should say “Solomonoff induction” instead of “Bayesian reasoning.” If I’m reasoning about a coin and I have a model with a single parameter representing the coin’s bias, there’s a sense in which I’m doing Bayesian reasoning, and there is some valid hypothesis for the coin’s bias. Most applied Bayesian ML work looks more like estimating a coin’s bias than modeling the world at a high enough resolution that the algorithm would have to model itself, so this seems like an important distinction.
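To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the post) of Bayesian reasoning about a coin: a discrete grid of hypotheses for the bias, reweighted by Bayes’ rule as flips come in. Hypotheses inconsistent with the data drop to zero weight, and the whole hypothesis space fits comfortably in memory, so the “can’t store a valid hypothesis” problem never comes up. The grid size and flip sequence are arbitrary choices for the example.

```python
import numpy as np

# Discrete grid of hypotheses for the coin's bias p(heads).
biases = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(biases) / len(biases)  # uniform prior over hypotheses

def update(posterior, flip):
    """One Bayesian update: flip is 1 for heads, 0 for tails."""
    likelihood = biases if flip == 1 else (1.0 - biases)
    unnormalized = posterior * likelihood
    return unnormalized / unnormalized.sum()

posterior = prior
for flip in [1, 1, 0, 1, 1, 1, 0, 1]:  # observed flips
    posterior = update(posterior, flip)

print("posterior mean bias:", (biases * posterior).sum())
print("P(bias = 0):", posterior[0])  # ruled out after the first observed heads
```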