When we query the Bayesian network, the answer is conditionally dependent on nodes with a high degree of expected future change [...].
But the point about meta-probability is that we do not have the nodes. Each meta level corresponds to one level of nesting networks within nodes.
If you maintain discipline and keep yourself [...] as a part of the system, you can calculate your current self’s expected probability just as perfectly without “metaprobability.”
Only insofar as you approximate yourself simply, as per above. This discards information.
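To make “discards information” concrete, here is a minimal sketch (the numbers are my own, not from the thread): two meta-level beliefs about a coin’s bias share the same expected probability, yet update very differently on new evidence, so collapsing each to its expectation throws away exactly what distinguishes them.

```python
# Two beta-distributed meta-beliefs about a coin's bias, both with mean 0.5.
def beta_mean(a, b):
    return a / (a + b)

flat  = (1, 1)      # "no idea at all": Beta(1,1), mean 0.5
sharp = (100, 100)  # "confident it's fair": Beta(100,100), mean 0.5

for a, b in (flat, sharp):
    # expected probability now vs. after observing 10 heads in a row
    print(beta_mean(a, b), beta_mean(a + 10, b))

# Both start at 0.5; after 10 heads the flat belief jumps to ~0.92 while the
# sharp one barely moves (~0.52). Reporting only the expectation (0.5 in both
# cases) discards that difference.
```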
Think of Bayesian graphs as implicitly complete, with the set of nodes being everything to which you have a referent. If you can even say “this proposition” meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node.
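A tiny sketch of that last point (toy numbers of my own): a node B wired to a parent A, but with a conditional probability table whose rows are identical. Queried with or without evidence on A, it answers exactly like an unconnected node.

```python
# Node B is "connected" to A, but its CPT ignores A entirely.
p_a = 0.3
cpt_b = {True: 0.8, False: 0.8}  # trivial CPT: P(B|A) = P(B|~A)

def p_b_given_a(a):
    return cpt_b[a]

def p_b_marginal():
    # sum over A of P(B|A) * P(A)
    return p_a * cpt_b[True] + (1 - p_a) * cpt_b[False]

print(p_b_given_a(True), p_b_given_a(False), p_b_marginal())  # all 0.8
```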
A big part of this discussion has been whether some referents (like black boxes) actually do have such trivial conditional probabilities, which end up returning an inference of 50%. It certainly feels like some referents should have no precedent, and yet it also feels like we still don’t say 50%. This is because they actually do have precedent (and conditional probabilities); it’s just that our internal reasoning is not always consciously available.
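As an illustration of how faint precedent pulls the answer off 50% (hypothetical numbers, nothing from the discussion): link the black-box proposition to a single vague precedent node and the inference moves, even when nothing consciously registers as evidence.

```python
# A "black box" proposition B with one vague precedent node P
# ("things roughly like this box"). All numbers are illustrative.
p_precedent = 0.7  # P(P = true): how likely the vague precedent applies

def infer_b(cpt):
    # P(B) = P(B|P) * P(P) + P(B|~P) * P(~P)
    return p_precedent * cpt[True] + (1 - p_precedent) * cpt[False]

print(infer_b({True: 0.5, False: 0.5}))  # 0.5  -- genuinely no precedent
print(infer_b({True: 0.6, False: 0.5}))  # 0.57 -- faint precedent already shifts it
```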
Sure, you can always use the total net of all possible propositions. But the set of all propositions is intractable. It may not even be sensibly enumerable.
For nested nets, at least, you can construct the net over the powerset of the nodes, and that will do the job in theory. In practice even that is horribly inefficient. And even though our brain is massively parallel, it surely doesn’t do that.
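For a sense of scale, here is a sketch of the blow-up (assuming n binary nodes): the powerset net has 2^n nodes before a single conditional probability has even been specified.

```python
# Standard powerset recipe: every subset of the original node set
# becomes a node in the powerset net.
from itertools import chain, combinations

def powerset(nodes):
    return chain.from_iterable(
        combinations(nodes, r) for r in range(len(nodes) + 1)
    )

nodes = [f"X{i}" for i in range(20)]
print(sum(1 for _ in powerset(nodes)))  # 1048576 nodes from just 20 originals
```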