Bayes’ law time!
Let T be the event that the universe has no causality, and O the event that the universe is as orderly as we have observed it to be. Then P(T|O) = P(T)·P(O|T)/P(O). (These are priors; they explicitly do not take into account what we actually know about T and O.) I'll let you pick P(T) and P(O). You can even pick P(T) = 0.99 and P(O) = 0.01. P(O|T), however, is so small that P(T|O), though it may be orders of magnitude larger, is still negligibly small.
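To make the arithmetic concrete, here is a minimal sketch of that update. The value of P(O|T) is an assumed stand-in for "vanishingly small"; the point is only that the posterior stays negligible even under generous priors.

```python
# Bayes' rule: P(T|O) = P(T) * P(O|T) / P(O)
# Illustrative values only; p_O_given_T = 1e-100 is an assumed stand-in
# for "vanishingly small", not a measured quantity.
p_T = 0.99            # generous prior that the universe has no causality
p_O = 0.01            # generous (low) prior of observing this much order
p_O_given_T = 1e-100  # chance of this much order given no causality

p_T_given_O = p_T * p_O_given_T / p_O
print(p_T_given_O)  # ~9.9e-99: two orders of magnitude above P(O|T), still negligible
```

The posterior is boosted by the factor P(T)/P(O) = 99, but that boost cannot rescue a likelihood this small.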
This is a rationalization. You have dressed your conclusion in the attire of Bayes, not used Bayes to reach (or communicate) the conclusion. The adept of chaos will simply reply that O is logical nonsense, and so P(O) cannot even be assigned.
And a procatalepsis: if P(O|T) is not small—perhaps because orderly regions are more prominent than chaotic ones—then tell us how the probability is determined such that this orderly region is so likely. If you can't do that, then you know less than just about anybody.