I have a hypothesis based on systems theory, but I don’t know how much sense it makes.
A system can only simulate a less complex system, not one at least as complex as itself. Therefore, human neurologists will never come up with a complete theory of the human mind, because they won’t be able to conceive of it; that is, the human brain cannot contain a complete model of itself. Even if collectively they come to understand all the parts, no single brain will be able to see the complete picture.
Am I missing some crucial detail?
I think you may be missing a time factor. I’d agree with your statement if it were “A system can only simulate a less complex system in real time.” As an example, designing the next generation of microprocessors can be done on current microprocessors, but simulating just microseconds of chip behavior often takes minutes or even hours to run.
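A toy illustration of that time trade-off (the `Machine` class and its tiny instruction set are invented for this example, not taken from any real simulator): the host spends many of its own operations per simulated cycle, so it can simulate a machine of comparable power, just far slower than real time.

```python
# Toy sketch: a host machine simulating another machine one cycle at a time.
# Each simulated cycle costs the host many of its own operations, which is
# why equal-complexity simulation is possible but not in real time.

class Machine:
    """A trivial accumulator machine: programs are lists of (op, arg) pairs."""
    def __init__(self, program):
        self.program = program
        self.acc = 0       # accumulator register
        self.pc = 0        # program counter
        self.cycles = 0    # simulated cycles executed

    def step(self):
        op, arg = self.program[self.pc]
        if op == "add":
            self.acc += arg
        elif op == "jmp":
            self.pc = arg - 1  # -1 because pc is incremented below
        self.pc += 1
        self.cycles += 1

    def run(self, max_cycles):
        while self.pc < len(self.program) and self.cycles < max_cycles:
            self.step()
        return self.acc

m = Machine([("add", 2), ("add", 3)])
print(m.run(max_cycles=10))  # prints 5 after 2 simulated cycles
```

Nothing about the simulated machine is simpler than the host; the host just pays for the simulation in time.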
Institutions are bigger than humans.
Also the time thing.
The whole point of a theory is that it’s less complex than the system you want to model. You are always making some simplifications.
It’s my understanding that nobody completely understands every part of major modern engineering projects (e.g. the Space Shuttle, large operating systems), and the brain seems more complex than those, so this is probably right. That said, we still have high-level theories describing those projects, so we’ll likely have high-level theories of the brain as well, allowing one to understand it in broad strokes if not in every detail.
It probably depends on what you mean by complexity. Surely a universal Turing machine can emulate any other universal Turing machine, given enough resources.
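To make the universal-machine point concrete, here is a minimal sketch (the `run_tm` simulator and the example machine are invented for illustration): a single fixed simulator can run *any* Turing machine you feed it as a rule table, at the cost of extra time and memory.

```python
# A minimal Turing machine simulator (Python standing in for a UTM).
# rules maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_tm(rules, tape, state="start", blank="_", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape, extendable in both directions
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(pos, blank)
        state, write, move = rules[(state, sym)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine, unary successor: scan right over the 1s,
# write a 1 on the first blank, then halt.
rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
print(run_tm(rules, "111_"))  # prints "1111"
```

The simulator itself never changes; only the rule table does, which is exactly the sense in which a universal machine can emulate any other machine given enough resources.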
On the other hand, neurological models of the brain need not be as complex as the brain itself, since much of its complexity is probably accidental.
Seems unlikely, given the existence of things like quines, and the fact that self-reference comes pretty easily. I recommend reading Gödel, Escher, Bach; it discusses your original question in the context of this sort of self-referential mathematics, and is also very entertaining.
Quines don’t say anything about human working-memory limitations, or about how much time a human would need to learn to understand the whole system; furthermore, they only concern printing the source code, not understanding it. So I’m not sure how they’re relevant here.
I wouldn’t be too surprised if the hypothesis is true for unmodified humans, but for systems in general I expect it to be untrue. Whatever ‘understanding’ is, the diagonal lemma should be able to find a fixed point for it (or at the very least, an arbitrarily close approximation); it would be very surprising if it didn’t hold. Quines are just an instance of this general principle that you can actually play with and poke around and see how they work, which helps demystify the core idea and gives you a picture of how this could be possible.
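For anyone who wants to poke at one: here is the classic two-line Python quine (a standard example, not from the book). The string is a template for the whole program, and formatting the template with itself reproduces the source exactly; this is the diagonal-lemma fixed-point construction in miniature.

```python
# A classic Python quine: s is a template for the entire program, and
# s % s fills the template with its own repr, printing the source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its own source, and feeding that output back into Python prints it again, indefinitely.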
Unless, from the beginning, you create the system to accomplish a certain set of tasks and then work to complete them. That can mean creating systems and subroutines in order to accomplish a larger goal. Take stocking a store, for example:
There are a few tasks to consider:
Price Changes
Presentation
Stocking product
Taking away old product (or excess)
A large store like Target has 8 different, loosely connected teams that accomplish these tasks. That is a store system of 8 subroutines within one building which, when it works at its best, makes sure that the store is perfectly stocked with the right presentation, the right amount of product, and the correct prices. That system of 8 subroutines is backed up by the 3 backroom subroutines that make up the backroom system, which takes in product and makes it available for stocking; that system is in turn backed up by the distribution center system, which is backed up by the transportation system (each truck and contractor working as a subroutine).
These systems and subroutines are created to accomplish one goal: to make sure that customers can find what they are looking for and buy it. I think using this idea we can start to create systems and subroutines that make it possible to replicate very complicated systems without losing anything.
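The nesting described above can be sketched as composed subroutines (all function names and numbers here are invented for illustration; a real store is obviously not four functions and two dicts):

```python
# Toy sketch of the store decomposition: each subsystem is a subroutine,
# and the "store system" is just their composition. No single subroutine
# knows the whole pipeline, yet the overall goal (full shelves) is met.

def transportation(order):        # trucks/contractors deliver the order
    return {"delivered": order}

def distribution_center(need):    # DC turns store needs into a shipment
    return transportation({item: qty for item, qty in need.items() if qty > 0})

def backroom(shipment):           # backroom receives and stages for the floor
    return {"staged": shipment["delivered"]}

def sales_floor(staged, shelf):   # floor teams stock the shelves
    shelf = dict(shelf)
    for item, qty in staged["staged"].items():
        shelf[item] = shelf.get(item, 0) + qty
    return shelf

shelf = {"cereal": 1}
need = {"cereal": 5, "soap": 0}
shelf = sales_floor(backroom(distribution_center(need)), shelf)
print(shelf)  # prints {'cereal': 6}
```

Each layer only needs to understand its own inputs and outputs, which is the sense in which a complicated system can be replicated without any single part holding the complete picture.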