My argument is not that mental states/experiences/whatever are simple. The argument is: however complicated mental experiences might be, the following is true:
My mind is less complex than (my mind + the rest of the universe).
Even more so if I consider that the universe contains a lot of minds that are as complex as my own.
By the way, I am not trying to prove solipsism, but to disprove Solomonoff induction by reductio ad absurdum.
You, my good sir, need to improve your intuitions on Kolmogorov Complexity.
Case in point
The reason I am asking you the above questions, asking whether you in fact know how to dissolve the concepts of “existence,” “dream-like illusion,” “outside world,” “experience” and similar words, is that I think almost all people who even have a concept of solipsism are not actually referring to a well-formed mathematical entity.
I have given extensive thought, along with three other strong Bayesians, to related concepts, and we have actually gotten somewhere interesting on dissolving the above phrases; alas, the message length is nontrivial.
So, all in all, disproving Solomonoff induction this way cannot be done, because solipsism isn't a well-formed mathematical entity, whereas the induction is.
Additionally, the universe is simpler than your mind, because it unfolds from mathematically simple statements into a very large thing. It is like a 1000000px by 1000000px picture of the Mandelbrot fractal: the computer program that creates it is significantly smaller than the image itself. A portrait of a man taken with a high-definition camera admits no similar compression. Even though the fractal is vastly larger, its complexity is vastly smaller.
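The fractal analogy can be made concrete. The sketch below (an illustration, with hypothetical names; not any particular renderer) decides Mandelbrot membership in a few lines, and the same fixed code renders the set at any resolution, so the program's length stays constant while the output grows without bound:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to belong to the Mandelbrot set.

    Iterate z -> z*z + c from z = 0; if |z| ever exceeds 2,
    the orbit escapes and c is outside the set.
    """
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True


def render(width: int, height: int) -> list[str]:
    """Render the set at any resolution -- the code size never changes."""
    rows = []
    for j in range(height):
        im = 1.2 - 2.4 * j / (height - 1)
        row = ""
        for i in range(width):
            re = -2.0 + 3.0 * i / (width - 1)
            row += "#" if in_mandelbrot(complex(re, im)) else "."
        rows.append(row)
    return rows
```

A 1000000-by-1000000 rendering is a trillion pixels, but its Kolmogorov complexity is bounded by these ~25 lines plus the two resolution numbers.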
And this is where you first and foremost go astray in your simple argument: (My Mind) < (My Mind + Universe). Firstly, your mind is part of the universe, so Universe + My Mind = Universe. Secondly, the universe consists of little more than the Schroedinger equation and some particle fields, but your mind consists either of the universe plus an appropriate spatial coordinate, or of all the atom positions in your brain, or of a long piece of AI-like code.
Which would be more complex:
1) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, or
2) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, and then uniquely identify your brain within that description?
A program which produced a model of your mind but not of the rest of the universe would probably be even more complicated, since any and all knowledge of that universe encoded within your mind would need to be included with the initial program rather than emerging naturally.
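A crude numerical illustration of this point, with loud assumptions: zlib compression as a stand-in for Kolmogorov complexity, a trivial repeating rule as a stand-in for physical laws, and random bits as a stand-in for a mind's detail. The rule-generated "universe" compresses to a tiny fraction of its size; the "mind" barely compresses; and locating the mind inside the universe costs only a short index:

```python
import math
import random
import zlib

random.seed(0)

# Toy "universe": a million characters that unfold from a two-character rule,
# playing the role of simple physical laws plus initial conditions.
universe = "01" * 500_000

# Toy "mind": ten thousand random-looking characters with no short generating rule.
mind = "".join(random.choice("01") for _ in range(10_000))


def ratio(s: str) -> float:
    """Compressed size per character -- a rough upper bound on complexity."""
    return len(zlib.compress(s.encode())) / len(s)


# Pointing at the mind's location inside the universe costs only an index:
# about log2(len(universe)) bits, i.e. ~20 bits for a million positions.
index_bits = math.log2(len(universe))
```

On this toy run the universe compresses far better per character than the mind does, so "universe program + ~20-bit index" can easily be a shorter description than any standalone description of the mind.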
Well, if the data is a description of your mind, then the code should produce a string that begins with a description of your mind, somehow. Pulling the universe out will require examining the code's internals.
If you do S.I. in a fuzzy manner, there can be all sorts of self-misconceptions; it can be easier (shorter coding) to extract an incredibly important mind, so you obtain not solipsism but narcissism. The prior for self-importance may then be quite large.
If you drop the requirement that the output string begin with a description of the mind, and instead search for the data anywhere within the output string, then a simple counter will suffice as valid code.
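The counter objection in code (a minimal sketch; the function names are hypothetical): a constant-size program that enumerates every binary string eventually contains any finite data as a substring, which is why the match-anywhere criterion is vacuous.

```python
from itertools import count, product


def all_strings():
    """Yield every binary string in length-then-lexicographic order.

    A constant-size program whose concatenated output eventually
    contains ANY finite bit string -- so "the data appears somewhere
    in the output" cannot distinguish hypotheses by itself.
    """
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)


def offset_of(target: str, limit: int = 100_000) -> int:
    """Concatenate the enumeration; return where `target` first occurs."""
    stream = ""
    for s in all_strings():
        stream += s
        if target in stream:
            return stream.index(target)
        if len(stream) > limit:
            raise RuntimeError("search limit reached")
```

Any finite pattern gets found at some finite offset, yet the enumerator encodes nothing about minds or universes.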
Take a very large prime. It is less complex than the list of all primes, you would say. But if the prime is big enough, the shortest code to generate it may be a correct prime-finding program that prints the Nth prime (storing the N of which prime to print).
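A sketch of that prime example (assuming, as above, that program length stands in for complexity): the code below is a fixed few lines, so the description length of "the Nth prime" is roughly a constant plus the bits needed to write N, about log2(N), however huge the prime itself gets.

```python
def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed) by trial division.

    The code is constant-size; only the stored index n grows,
    and it grows like log2(n) bits -- far slower than the prime.
    """
    found, candidate = 0, 1
    while found < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            found += 1
    return candidate
```

The program plus a compact encoding of N is the short description; writing out the prime's digits directly is the long one.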