Best explanation by what standard? By the standard where you rank universes from least complex to most complex! You cannot do two different rankings simultaneously.
So then, are you saying that you do not think that a simplicity prior on your brain is a good idea?
The shortest explanation for my thoughts is precisely a simplicity prior on my brain; nothing in it is about universe complexity.
I believe that the shortest explanation for my thoughts is the one that says “Here is the universe. Within the universe, here is this dude.” This is a valid explanation for my brain, and it gets longer if I have to modify it to make my brain “simpler” in the sense you are using, not shorter.
No, it doesn’t. Picking between microstates isn’t a “modification” of the universe; it’s simply talking about the observed probability of something that already happens all the time.
Although now that I think about it, this argument should apply to more traditional anthropics as well, if a simplicity prior is used. And since I’ve done this experiment a few times now, I can say with high confidence that a strong simplicity prior is incorrect when flipping coins (especially when anthropically flipping coins [which means I did it myself]), and a maximum entropy prior is very close to correct.
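To put a rough number on that last claim, here is a toy sketch (my own illustration, not something established above): it treats the zlib-compressed length of a length-12 flip sequence as a crude stand-in for description length, weights each sequence by 2 to the minus that length to get a "simplicity prior", and compares the probability both priors assign to the maximally regular all-heads sequence against the empirical frequency from actually flipping. The zlib proxy, the sequence length, and the trial count are all arbitrary choices on my part.

```python
import itertools
import random
import zlib

# Toy comparison of a "simplicity prior" vs a maximum-entropy (uniform) prior
# over length-n coin-flip sequences. The simplicity prior is a crude stand-in
# for a description-length prior: each sequence is weighted by
# 2 ** -(zlib-compressed length). This is illustrative only; zlib overhead
# dominates for short strings, so treat it as a rough proxy.

n = 12
sequences = [''.join(bits) for bits in itertools.product('HT', repeat=n)]

def complexity(seq: str) -> int:
    """Rough complexity proxy: length in bytes of the zlib-compressed sequence."""
    return len(zlib.compress(seq.encode()))

weights = {s: 2.0 ** -complexity(s) for s in sequences}
total = sum(weights.values())
simplicity_prior = {s: w / total for s, w in weights.items()}
uniform_prior = 1.0 / len(sequences)

# Probability each prior assigns to the maximally regular sequence "all heads".
all_heads = 'H' * n
print(f"simplicity prior P(all heads) = {simplicity_prior[all_heads]:.2e}")
print(f"uniform prior    P(all heads) = {uniform_prior:.2e}")

# Empirical frequency from actually flipping coins: all-heads shows up at the
# uniform rate 2**-n, not at the inflated rate a strong simplicity prior implies.
trials = 200_000
hits = sum(
    all(random.random() < 0.5 for _ in range(n))
    for _ in range(trials)
)
print(f"empirical        P(all heads) = {hits / trials:.2e}  (expected {2.0 ** -n:.2e})")
```

On my machine the simplicity-weighted prior puts far more mass on the regular sequence than the uniform prior does, while the empirical frequency tracks the uniform value; that is the sense in which I mean the maximum entropy prior is "very close to correct" for coin flips.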