But what about “dust minds” inside objects existing now, like my table? Given ~10^80 particles in the universe, its existence to date of roughly 10^17 seconds, and collisions every few nanoseconds, there should be a very large number of randomly appearing causal structures that may be similar to the experiences of observers.
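As a rough back-of-envelope sketch of that count (the collision rate of one event per nanosecond per particle is an assumed round figure, not a measured one):

```python
# Back-of-envelope count of particle interaction events in the observable
# universe, an upper bound on how many random causal structures could arise.
# All figures are order-of-magnitude assumptions from the comment above.

particles = 10**80             # particles in the observable universe
age_seconds = 10**17           # age of the universe in seconds
collisions_per_second = 10**9  # assumed: one collision every few nanoseconds

total_events = particles * age_seconds * collisions_per_second
print(f"~10^{len(str(total_events)) - 1} interaction events")  # ~10^106
```

So even a vanishingly small fraction of ~10^106 events resembling observer-like causal structures would still be a huge absolute number.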
I have opinions on this kind of reasoning that I will publish later this month (hopefully), around issues of syntax and semantics.
Did you publish it? link?
Mostly the symbol grounding posts:
https://www.lesswrong.com/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically
https://www.lesswrong.com/posts/ix3KdfJxjo9GQFkCo/web-of-connotations-bleggs-rubes-thermostats-and-beliefs
https://www.lesswrong.com/posts/XApNuXPckPxwp5ZcW/bridging-syntax-and-semantics-with-quine-s-gavagai
Thanks, I have seen them, but I have yet to make the connection between that topic and Boltzmann brains.
Basically, that the “dust minds” are all crazy: their internal beliefs correspond to nothing in reality, and there is no causality for them except by sheer coincidence.
See also this old post: https://www.lesswrong.com/posts/295KiqZKAb55YLBzF/hedonium-s-semantic-problem
My main reason for rejecting most types of BBs is this causality breakdown: there’s no point in computing the probability of being a BB, because your decisions are causally irrelevant in those cases. In longer-lived Boltzmann simulations, however, causality matters, so you should include them.
There is a possible type of causal BB: a process whose bare causal skeleton is similar to the causal structure of an observer-moment (which itself has, to a first approximation, the causal structure of a convolutional neural net). In that case, there is causality inside just one OM.
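To make “causal skeleton of an observer-moment” concrete, here is a minimal toy sketch (my own illustration, not from the original comments; the layer sizes and the subgraph-matching criterion are arbitrary assumptions): the OM is modeled as a layered feed-forward DAG, roughly the shape of a small convolutional net, and a random physical process counts as a causal BB if its event-dependency graph contains that skeleton.

```python
# Toy model: an observer-moment's causal skeleton as a layered feed-forward
# DAG, and a crude test of whether a physical process's event graph contains
# it. Layer sizes and the matching criterion are illustrative assumptions.

from itertools import product

def layered_dag(layer_sizes):
    """Return the edge set of a fully connected feed-forward DAG."""
    edges = set()
    offset = 0
    for a, b in zip(layer_sizes, layer_sizes[1:]):
        src = range(offset, offset + a)
        dst = range(offset + a, offset + a + b)
        edges |= set(product(src, dst))
        offset += a
    return edges

# "Observer-moment" skeleton: input layer -> two hidden layers -> output.
om_skeleton = layered_dag([4, 3, 3, 1])

def matches(process_edges, skeleton):
    """A process counts as a causal BB (in this toy sense) if its
    event-dependency edges contain the OM skeleton as a subgraph."""
    return skeleton <= process_edges

print(matches(om_skeleton | {(0, 10)}, om_skeleton))  # True
print(matches(set(), om_skeleton))                    # False
```

The point of the sketch is only that the criterion is structural: what matters is whether the process instantiates the right pattern of causal dependencies within a single OM, not what the events are made of.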