Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.
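To pin down what that claim says before poking at it, here is my reading of it as a toy cost model; the function and parameter names are my own shorthand, nothing from the original claim:

```python
# Toy restatement of the claimed scaling, not an actual simulator.
# M = number of observers, C = per-observer complexity, T = simulated time steps.
def claimed_costs(M, C, T):
    space = M * C      # ~O(MC): state kept only for the observers and what they observe
    time = M * C * T   # ~O(MCT): that state advanced for T steps
    return space, time
```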
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
Then simulate the observations using your optimal (O(MCT) = O(n^{2p})) algorithm. Voilà! You have the answer to your NP problem, and you obtained it at a cost polynomial in both time and space, so the problem was in P. Therefore NP is contained in P, so P = NP.
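To make the reduction concrete, here is a sketch under the assumption (which is exactly what is in dispute) that such an optimal output-sensitive simulator exists as a black box; solve_sat_via_simulation, optimal_simulate, and the universe encoding are all hypothetical stand-ins, not anything anyone has implemented:

```python
# Entirely hypothetical: optimal_simulate is the contested black box, assumed
# to run in ~O(M*C*T) no matter how much machinery the universe contains.

def solve_sat_via_simulation(formula, n, optimal_simulate):
    universe = {
        "verifiers": 2 ** n,        # N = O(2^n) computers, one per candidate assignment
        "formula": formula,
        "observers": 1,             # M = 1
        "observer_complexity": n,   # C = O(n^p); p = 1 here for concreteness
        "steps": 2 * n,             # T = O(n^p): parallel verification, then the observer re-verifies
    }
    # Claimed cost: O(M*C*T) = O(n^2), polynomial in n despite the 2^n verifiers.
    return optimal_simulate(universe)   # the observer's verified satisfying assignment, or None
```

The point of the parody is that the advertised O(MCT) bound would have to hold no matter what those 2^n verifiers were doing off-screen.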
Dibs on the Millennium Prize?
I never claimed that “hypothetical optimal output-sensitive approximation algorithms” are capable of universal emulation of any environment/Turing machine using constant resources. The word “approximation” should have signaled that.
Computers are like brains, and unlike simpler natural phenomena, in that they do not necessarily have very fast approximations at all scales (due to the complexity of irreversibility), and the most efficient inference of one agent’s observations could require forward simulation of the recent history of the other agents/computers in the system.
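A toy illustration of the asymmetry I mean; both functions and the framing are my own example, nothing load-bearing. A diffuse field can be coarse-grained with small error, whereas an iterated hash has no cheaper summary than running it forward step by step:

```python
import hashlib

# A simple natural phenomenon: a coarse average tracks the fine state well,
# so a simulator can spend far fewer operations than the "true" dynamics.
def coarse_grain(field, factor):
    return [sum(field[i:i + factor]) / factor
            for i in range(0, len(field), factor)]

# A computation: each state depends on the exact previous state, and no known
# shortcut beats running it forward; inferring step N costs N hashes.
def iterated_hash(seed: bytes, steps: int) -> bytes:
    state = seed
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state
```

The cost gap is the point: coarse_grain gets cheaper at whatever resolution you choose, while iterated_hash offers no such knob.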
Today the total computational complexity of all computers in existence is not vastly larger than the total brain complexity, so simulating them is still ~O(MCT).
Also, we should keep in mind that the simulator has direct access to our mental states.
Imagine the year is 2100 and you have access to a supercomputer with a ridiculous amount of computation, say 10^30 flops, or whatever. In theory you could use that machine to solve some NP problem, verify the solution yourself, and thus prove to yourself that you don’t live in a simulation which uses less than 10^30 flops.
Of course, since the specific computation you performed presumably had no value to the simulator, the simulation could simply override the neural states in your mind slightly, so that the specific input parameters you chose were changed to match a previously cached input/output pair.
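A minimal sketch of that move, entirely hypothetical; handle_expensive_job, the cache, and observer_memory are illustrative stand-ins for whatever the simulator actually manipulates:

```python
# Hypothetical simulator-side handler for the observer's expensive job.
# Instead of spending ~10^30 flops, it nudges the observer's memory so that
# the inputs "they chose" match a previously cached input/output pair.

def handle_expensive_job(requested_inputs, cache, observer_memory):
    # requested_inputs is deliberately ignored; that is the whole trick.
    cached_inputs, cached_output = next(iter(cache.items()))
    observer_memory["chosen_inputs"] = cached_inputs   # overwrite the recollection
    return cached_output                               # no new computation performed
```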