Let’s say you flip a coin ten times. Would you expect ten heads to occur more often than 1 in 2^10 in your experience, because it is simpler to think “ten heads” than it is to think “HTTHTHHHTH”?
Does your point remain valid if you take a realistic distribution over coin imperfections into account?
Possibly irrelevant calculation follows. (Do we have hide tags? Apparently not.)
Suppose we have the simplest sort of deviation possible: let alpha be a small bias toward heads, so that P(heads) = 1/2 + alpha and P(tails) = 1/2 - alpha.
P(10 heads) = (1/2 + alpha)^10
P(HTTHTHHHTH) = (1/2 + alpha)^6 * (1/2 - alpha)^4
Remarkably (?):
dP(10 heads)/dalpha = 5/256 at alpha = 0
dP(HTTHTHHHTH)/dalpha = 1/256 at alpha = 0
It seems that simple coin deviations (which are, by hypothesis, the most probable) have a stronger influence on simple predictions such as P(10 heads) than on complicated predictions such as P(HTTHTHHHTH).
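(If anyone wants to check the arithmetic, here is a minimal sketch using sympy that symbolically differentiates the two expressions above and evaluates at alpha = 0; the variable names are my own.)

```python
# Symbolic check of the two derivatives above (assumes sympy is installed).
import sympy as sp

alpha = sp.symbols("alpha")

# Coin with a small bias alpha toward heads: P(H) = 1/2 + alpha, P(T) = 1/2 - alpha.
p_ten_heads = (sp.Rational(1, 2) + alpha) ** 10
p_mixed = (sp.Rational(1, 2) + alpha) ** 6 * (sp.Rational(1, 2) - alpha) ** 4  # HTTHTHHHTH: 6 H, 4 T

print(sp.diff(p_ten_heads, alpha).subs(alpha, 0))  # prints 5/256
print(sp.diff(p_mixed, alpha).subs(alpha, 0))      # prints 1/256
```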
Applying this to the real world, the theory predicts that I should expect myself, at my current moment, to be Kolmogorov simple. I don’t feel particularly simple, but feeling simple is different from being simple. There is strong evidence against the theory only if it is probable that simplicity would be perceived, conditional on it existing.
I think it would be easy for a conscious being not to perceive its own simplicity, because my experience with math and science shows that humans often do not easily notice simplicity beyond a very low threshold. Some beings may fall below this threshold, such as the first conscious being or the most massive conscious being, but I find it unlikely that beings of this type carry a probability anywhere near that of all other conscious beings, especially considering how hard these concepts are to make precise.
Using your example of beings that observe coin tosses, simple but low-probability events may be the easiest way to specify someone, but there could also easily be a lower-complexity way that is less apparent to the observer. This seems likely enough that failing to observe the events the theory makes highly probable does not provide exceptionally strong evidence against it.
No.
The simplest way to describe either phenomenon, when combined with the other experience that leads me to believe there is a universe beyond my brain, is to describe the universe and point to my brain inside it. If I saw enough coins come up heads in some consistent situation (for example, whenever I try to test anthropic principles), then at some point a lawful universe would cease to be the best explanation. The exact same thing is true for Solomonoff induction, though the quantitative details may differ very slightly.
But ordering by the complexity of your brain, rather than of the universe, already postulates that a lawful universe isn’t the best explanation. You can’t have your cake and eat it too.
A lawful universe is the best explanation for my experiences. My experience is embodied in a particular cognitive process. To describe this process I say:
“Consider the system satisfying the law L. To find Paul within that system, look over here.”
In order to describe the version of me that sees 10 heads in a row, I instead have to say:
“Consider the system satisfying the law L, in which these 10 coins came up heads. To find Paul within that universe, look over here.”
The probability of seeing 10 heads in a row may be slightly higher than 1 in 2^10: additional explanations increase the probability of an experience, and the description of an “arbitrary change” is easier if the change is to make all 10 outcomes H rather than to set the outcomes in some more complicated way. However, the same effect is present in Solomonoff induction.
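(As a crude illustration of that last point, and nothing more rigorous: if compressed length is used as a very rough stand-in for description length, the all-heads pattern takes fewer bytes to pin down than the mixed pattern from the example. zlib is my own choice of proxy here, and only the relative comparison is meant to be suggestive.)

```python
# Crude description-length proxy: zlib-compressed size in bytes.
# This is only a loose stand-in for Kolmogorov complexity; the absolute numbers
# are dominated by format overhead, so only the comparison is meaningful.
import zlib

uniform = b"H" * 10    # "make all 10 outcomes H"
mixed = b"HTTHTHHHTH"  # the specific mixed pattern from the example

print(len(zlib.compress(uniform)))  # fewer bytes: a uniform run is cheap to describe
print(len(zlib.compress(mixed)))    # more bytes: the pattern has to be spelled out
```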
There are many more subtleties here, and there are universes that involve randomness in such a way that I would predict HHHHHHHHHH to be the most likely result of looking at 10 coin flips in a row. But the same things happen with Solomonoff induction, so they don’t seem worth talking about here.
Best explanation by what standard? By the standard where you rank universes from least complex to most complex! You cannot do two different rankings simultaneously.
So then, are you saying that you do not think that a simplicity prior on your brain is a good idea?
The shortest explanation for my thoughts: that is precisely a simplicity prior on my brain. There is nothing about universe complexity.
I believe that the shortest explanation for my thoughts is the one that says “Here is the universe. Within the universe, here is this dude.” This is a valid explanation for my brain, and it gets longer, not shorter, if I have to modify it to make my brain “simpler” in the sense you are using.
No, it doesn’t. Picking between microstates isn’t a “modification” of the universe; it’s simply talking about the observed probability of something that already happens all the time.
Although now that I think about it, this argument should apply to more traditional anthropics as well, if a simplicity prior is used. And since I’ve done this experiment a few times now, I can say with high confidence that a strong simplicity prior is incorrect when flipping coins (especially when anthropically flipping coins [which means I did it myself]), and a maximum entropy prior is very close to correct.
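(A toy version of that comparison, for what it's worth: weight every possible 10-flip sequence by a crude simplicity proxy, then compare the probability such a prior gives to ten heads against the max-entropy prior and against simulated fair flips. The zlib proxy and the base-2 weighting are arbitrary choices of mine, so this is a sketch of the shape of the disagreement, not a real test.)

```python
# Sketch: a crude "simplicity prior" over 10-flip sequences versus the max-entropy
# (uniform) prior, compared with simulated fair coin flips. zlib-compressed length
# is an arbitrary stand-in for complexity; only the qualitative gap matters.
import itertools
import random
import zlib

seqs = ["".join(s) for s in itertools.product("HT", repeat=10)]  # all 1024 sequences

def proxy_complexity(s: str) -> int:
    return len(zlib.compress(s.encode()))  # bytes; a very rough complexity proxy

weights = {s: 2.0 ** -proxy_complexity(s) for s in seqs}  # simpler => heavier weight
total = sum(weights.values())

p_simplicity = weights["H" * 10] / total  # probability the toy simplicity prior gives to ten heads
p_uniform = 1 / len(seqs)                 # max-entropy prior: 1/1024

trials = 200_000
hits = sum(all(random.random() < 0.5 for _ in range(10)) for _ in range(trials))

print("toy simplicity prior: ", p_simplicity)
print("max-entropy prior:    ", p_uniform)
print("empirical frequency:  ", hits / trials)  # tracks the max-entropy prior
```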