I wonder if there’s also an analogy to the Gibbs sampling algorithm here.
For a believer, the chain will mostly bounce back and forth between "Assuming God is real, the Bible is divinely inspired" and "Assuming the Bible is divinely inspired, God must be real". But if these are not certainties, occasionally it must generate "Assuming God is real, the Bible is actually not divinely inspired". And from there, probably to "Assuming the Bible is not divinely inspired, God is not real." But occasionally it can also "recover", generating "Assuming the Bible is not divinely inspired, God is actually real anyway." So you need that conditional probability too. Given all the conditional probabilities, the resulting chain samples from the joint distribution over whether or not the Bible is divinely inspired and whether or not God is real.
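A minimal sketch of this dynamic as an actual Gibbs sampler, with two binary variables G ("God is real") and B ("the Bible is divinely inspired"). The conditional probabilities are purely illustrative numbers I made up, not anything from the comment above; the "recovery" path is P(G=1 | B=0) being small but nonzero.

```python
import random

# Illustrative (made-up) conditional probabilities:
# p_B_given_G[g] = P(B = 1 | G = g), p_G_given_B[b] = P(G = 1 | B = b).
p_B_given_G = {1: 0.9, 0: 0.1}    # if God is real, the Bible is probably inspired
p_G_given_B = {1: 0.95, 0: 0.2}   # the "recovery" path: P(G=1 | B=0) = 0.2

def gibbs(n_steps, seed=0):
    """Alternately resample each variable given the other; tally visited states."""
    rng = random.Random(seed)
    G, B = 1, 1  # start as a believer
    counts = {(g, b): 0 for g in (0, 1) for b in (0, 1)}
    for _ in range(n_steps):
        B = 1 if rng.random() < p_B_given_G[G] else 0  # resample B given G
        G = 1 if rng.random() < p_G_given_B[B] else 0  # resample G given B
        counts[(G, B)] += 1
    # Empirical frequencies approximate the chain's stationary joint distribution.
    return {state: c / n_steps for state, c in counts.items()}

freqs = gibbs(100_000)
```

With these numbers the chain spends most of its time in the (G=1, B=1) "believer" state but visits the other three states with nonzero frequency, exactly the occasional excursions and recoveries described above. (One caveat: Gibbs sampling assumes the two conditionals are compatible with some joint distribution; an arbitrary pair of conditionals need not be, though the chain still has a stationary distribution.)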