I am not surprised when a video game character consistently summons balls of fire out of nothingness. I would be absolutely astounded to see an actual person do this. This is because the system of rules governing a video game and the system governing a deterministic universe appear to be very, very different.
If we were living in the Matrix, this would not be the case. It would not mean that we are necessarily in the kind of video game where there are psychic powers, but it would provide a very clear mechanism through which psychic powers could act. Such a mechanism does not appear possible in a deterministic universe, or at least in the one we seem to occupy.
The real world is uncaring and unsupervised. Magic is not just about the world being “complex”; it’s about the world containing mechanisms that target humans specifically and understand the situation much as a human would. Being “deterministic” doesn’t preclude anything; it’s more a way of seeing things than the way things are.
An artificial dichotomy.
I don’t think so. Video games are specifically programmed to create a particular experience for the user. If something goes over the horizon and won’t be needed again, it just doesn’t get computed. Whereas the real universe seems to be—just the same physics. Everywhere. No complicated ad hoc programming describing levels or characters or points, or translating keypresses into useful actions—no user input at all, come to think of it.
Not quite. That’s what we assume happens (justifiably!) because it would be a far more complicated hypothesis to disbelieve in the implied invisible.
However, failing to see these implied invisibles is not itself independent evidence of universal law, just an inference from an Occamian prior. You would fail to see implied invisibles with equal probability whether or not the laws were fully universal.
Interestingly, I explored the question of whether it’s possible, if the universe is a simulation, to shut it down by forcing it to do more and more computational work in order to keep fooling us. But, I argue, it turns out that the 2nd law of thermodynamics implies that no matter what observations observers choose to make, it requires no more storage capacity to continue fooling them.
I read this, but I’m a little confused. Conceptually, as a closed system, the computational demand of the universe is constant, sure, when I imagine it as something like the Game of Life. Are you assuming that any simulator will be a full and perfect emulator, with no optimizations like caches?
Because if optimizations are applied, then it seems you can inflate the necessary computing power by doing things that defeat the optimizations. Caches are ineffective if you keep generating intricately linked cryptographic junk, etc. One might think that no simulating agent would run a simulator whose worst-case requirements are beyond its abilities; but then, we humans routinely use QuickSort and don’t mind our kernels over-committing memory...
(Incidentally, I made an estimation of my own for how small our substrate could be: http://www.gwern.net/Simulation%20inferences.html . I concluded that the simulating computer could be as small as a Planck cube.)
It doesn’t rely on that assumption. It’s just based on the fact that any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states.
The more states something could be in, from your perspective, the less information the simulator has to store to consistently represent it for you.
I vaguely see what you’re getting at—every observation or interaction forces the simulator to calculate what you see, but also allows it to cheat in other areas. But I’m not sure how exactly this would work on the level of bits and programs?
This is a very conceptually interesting question.
Bah! Implementation issue! :-P
At the level you’re asking about (if I understand you correctly), the program can just reallocate memory from whatever gained entropy to whatever lost entropy.
Like in the comments section of my blog: if you learn the location of a ball, the program now has to store it as being in a definite location, but I also had to power my brain to learn that, so the program doesn’t have to be as precise in storing information about the chemical bonds in the fuel I burned, which were moved to a higher-entropy state.
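To make the bookkeeping concrete, here is a toy sketch of that accounting (my own illustration, with invented state counts, not a model of any real physics): the simulator only has to store the bits the observer has pinned down, i.e. log2(maximum possible states) minus log2(states still possible from the observer’s perspective).

```python
import math

def bits_to_store(max_states: int, states_possible_to_observer: int) -> float:
    """Bits the simulator must commit to: whatever the observer has pinned down."""
    return math.log2(max_states) - math.log2(states_possible_to_observer)

# Before: the ball (A) is completely unknown to me (2**20 possible positions),
# while the fuel in my brain (B) is in a known low-entropy state (2**10 of 2**30 states).
before = bits_to_store(2**20, 2**20) + bits_to_store(2**30, 2**10)

# After: I learn the ball's exact location (1 possible state), but doing so
# pushed the fuel's chemical bonds into a higher-entropy state (any of 2**30 states).
after = bits_to_store(2**20, 1) + bits_to_store(2**30, 2**30)

print(before, after)  # 20.0 and 20.0: pinning down the ball freed at least as much elsewhere
```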
Spoken like a true theoretician. But it’s hard to see an implementation that is optimal in exploiting this memory bound.
I mean, imagine that we have a pocket universe where we can have many numbers (particles?) which all must add up to 1000, and we have your normal programming types like bit, byte, int, big Integer, etc.
If we start out with a single 1000, and then the ‘laws of physics’ begin dividing it by 10 (giving us ten 100s), how is the simulator going to be smart enough to take its fixed section of RAM and rewrite the single large integer 1000 as ten smaller ints, and so on down to a thousand 1s, which could be single bits?
Is there any representation of the universe’s state which achieves these tricks automatically, or does the simulation really just have to include all sorts of conditionals like ‘if x changed: if x > 128, store x as a big Integer; else if 1 < x <= 128, store x as an int; else store x as a bit’ in order to preserve the constant memory usage?
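For concreteness, here is roughly what I mean by those brute-force conditionals, as a small Python sketch (purely illustrative; the size tiers are arbitrary):

```python
def storage_bits(x: int) -> int:
    """Pick a storage width for x by brute-force, tier-by-tier conditionals."""
    if x > 128:
        return x.bit_length()  # fall back to an arbitrary-width Integer
    elif x > 1:
        return 8               # small enough for a byte-sized int
    else:
        return 1               # 0 or 1 fits in a single bit

def total_bits(state: list[int]) -> int:
    return sum(storage_bits(x) for x in state)

# The pocket universe: one 1000, and the 'laws of physics' keep splitting
# each number into ten equal parts.
state = [1000]
print(len(state), "numbers,", total_bits(state), "bits")
for _ in range(3):
    state = [x // 10 for x in state for _ in range(10)]
    print(len(state), "numbers,", total_bits(state), "bits")
```

(Note that even with the re-packing, the total in this toy grows as the state fragments, from 10 bits for the single 1000 up to 1000 bits for the thousand 1s.)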
I don’t think this hypothetical universe is comparable in the relevant ways: it must be capable of representing the concept of an observer, and what that observer knows (has mutual information with), and adhere to the 2nd law of thermodynamics. Which I don’t think is the case here.
Wait, there has to be an observer? I thought you were really just talking about entangled wave-functions etc.
No, that’s the point Jack brought up. I was only discussing the issues that arise in the hypothetical scenario in which the universe is simulated in an “overworld” and must successfully continue to fool us.
You make an interesting observation. I’m still trying to think it through, so I might not yet be making sense. But, right now, I have the following difficulty with accepting your argument.
Any simulation has “true” physical laws. These are just the rules that govern how in fact the simulation’s algorithm unfolds, including all optimizations, etc.
However, we expect, a priori, the ultimate laws of reality to satisfy certain invariances. For example, perhaps we expect the ultimate laws to work identically at different points in real physical space. The true laws of the simulation might not satisfy such invariances with respect to the simulation. For example, the simulation’s laws might not work identically at different points in the simulated physical space. [ETA: Optimization makes this likely. The simulation could evolve in a “chunkier” way far from us than it does close to us.]
So maybe this is how we can define what it means to hide the simulated nature of our universe from us: “Hiding the simulation” means “making our universe appear to us as though its laws satisfy all the expected invariances, even though they don’t”.
Here’s the issue that I hope you address:
I’m convinced by your argument that “any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states.”
Say that, when I start out, system A could be in any one of the states in some state-set X. Then I learn about system B, and so, as you point out, system A could now be in any one of the states in some larger state-set Y, as far as I know.
But what if the larger state-set Y includes states that do not obey the expected invariances? And what if, as I learn more about the universe, the state-set that A’s state must be in grows, all right, but eventually consists almost entirely of states that violate our expected invariances?
Wouldn’t that amount to discovering the simulated nature of our universe? To avoid this discovery, wouldn’t the simulators have to put more resources into making sure that A’s set of possible states includes enough states that obey the expected invariances?
Good point—I’ve struggled with the same problem, in different terms. Let me know if my statement of the problem matches the point you’re making here:
“It’s possible to discover, not just particulars about individual systems, but universal laws. These universal laws put a constraint on all future observations, thus reducing the subjective entropy of the universe, without (apparently) needing any corresponding gain of entropy.”
It’s something I was wondering about when going over the E. T. Jaynes papers and Yudkowsky’s Engines of Cognition.
I haven’t gotten it resolved in terms of the 2nd law and the “subjective entropy” idea, but I think I know how to resolve it in the context of the simulated universe question: basically, if the simulation starts out adhering to the invariances that have to be obeyed (even though they might be more than necessary to fool observers), then it is no additional burden on the simulator for the observers to notice these invariances.
Though the observers have (apparently) violated the 2nd law—and this is an area for further research—the simulator was already expending the computational resources necessary to make the invariances hold. It is an exception to the general principle I derived, in that it’s a case where net destruction of entropy requires no additional RAM.
I’m still working on how to resolve the remaining problems, but it shows how discovery of universal physical laws needn’t be a problem for the simulator.
I’ll try to bring your solution back into thermodynamic terms:
The universe always has and always will obey certain invariances, and those are a redundancy in your observations, which (along with any other redundancy that could possibly be derived) is already taken into account when computing information-theoretic entropy. If you had plenty of data already to derive the invariance but just hadn’t previously noticed it, that lack of logical omniscience is why the 2nd law is an inequality. Including the invariance into your future predictions isn’t a net reduction in entropy. It just removes some of the slack between the exact phase-volume preserving transforms of physics and the upper bounds that a computationally bounded agent has to use.
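A tiny numerical illustration of that last point (my own toy example, nothing from real physics): take states (a, b) with a and b each in 0..5, where the dynamics always conserve a + b = 5. The ensemble was confined to that surface all along; an agent who hadn’t noticed the law was just using a looser upper bound.

```python
import math

all_states    = [(a, b) for a in range(6) for b in range(6)]
lawful_states = [(a, b) for (a, b) in all_states if a + b == 5]  # the invariance

bound_before = math.log2(len(all_states))     # ~5.17 bits: law not yet noticed
bound_after  = math.log2(len(lawful_states))  # ~2.58 bits: law noticed
print(round(bound_before, 2), round(bound_after, 2))
# The fine-grained entropy never changed; only the agent's computable
# upper bound moved closer to it.
```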
Your restatement looks exactly right, and your solution would resolve the issue I raised.
One question is, how much optimization can the simulators do if the true laws are as invariant as they “ought to be”? For example, if the universe has to evolve according to the same rules everywhere, that would seem to keep it from evolving in a chunkier way far away from us, which closes off a potential way to save on computation.
The simulator can maintain conservation of e.g. mass, while not churning through the computations required for e.g. gravity until people see enough that they can check if gravity isn’t holding.
This would save on having to do the gravity calculations. Then, when people, armed with their knowledge of gravity, start looking in more places, the universe must pick a configuration and stick with it—but at that point, all of their observations have the original problem of freeing up memory somewhere else in the form of higher entropy.
On second thought, that doesn’t work either, since discovery of gravitational laws will constrain their existing predictions of where the planets will be, and this destruction of entropy is unrelated to the entropy needed to create it, which was your objection to begin with.
My best guess at this point is that any resolution will ultimately hinge on a finer-grained information-theoretic analysis of the discovery of universal laws. That is, as you gain evidence pointing to the validity of laws you notice, you assign a high-but-not-unity probability to the laws continuing to hold. Each time your probability goes up, that corresponds to a particular reduction in the entropy of your probability distribution.
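To put numbers on that last sentence (just a quick illustration): if p is your probability that the law continues to hold, the entropy of that two-outcome distribution falls as p rises toward 1.

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy, in bits, of assigning probability p to 'the law holds'."""
    q = 1.0 - p
    return -(p * math.log2(p) + q * math.log2(q))

for p in (0.5, 0.9, 0.99, 0.999):
    print(p, round(binary_entropy(p), 4))
# 0.5 -> 1.0 bit, 0.9 -> ~0.469, 0.99 -> ~0.081, 0.999 -> ~0.011
```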
But, as they say, “to make inferences you have to make assumptions”. There is some entropic cost to making the assumptions necessary for the model with invariants to work, and this must be properly accounted for. I’ll continue to research this.
This is wrong (even assuming that previous coarse-grained observations don’t matter). If you are changing the model by refining it, choosing one option of more detailed data arbitrarily, then this process on the world-model isn’t reversible: you can’t “un-choose” that arbitrary data and remain able to reconstruct it (unless the data is not arbitrary after all and only depends on the world model that is already there). As a result, no magical increase in entropy occurs, and no resources get saved: it’s not an operation on the subsystems within the modeled world, it’s an operation on the system of whole-world model within the world of modelers.
Also, consider the fact that ultimate laws can never be discovered, strictly speaking: there will always be uncertainty, and maybe there won’t even be asymptotically certain candidates, only turtles always deeper and deeper.
When I was first introduced to quantum mechanics my professor taught us the Copenhagen Interpretation. I was immediately reminded of occasional moments in video games where features of a room aren’t run until the player gets to the room. It seemed to me that only collapsing the wave function when it interacted with a particular kind of physical system (or a conscious system!) would be a really good way to conserve computing power, and that it seemed like the kind of hack programmers in a fully Newtonian universe might use to approximate their universe without having to calculate the trajectories of a googolplex (ed.) subatomic particles.
Can anyone tell me if this actually would save computing power/memory?
The answer basically comes down to the issue of saving on RAM vs. saving on ROM. (RAM = amount of memory needed to implement the algorithm, ROM = amount of memory needed to describe the algorithm.)
Video game programmers have to care about RAM, while the universe (in its capacity as a simulator) does not. That’s why programmers generate only what they have to, while the universe can afford to just compute everything.
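As a toy illustration of that ROM/RAM distinction (my own example, nothing to do with real physics): the description of the rule below is a few lines, but running it honestly over the whole “world” takes working memory proportional to the world’s size.

```python
# Elementary cellular automaton (rule 110): tiny program text ("ROM"),
# but working state proportional to the number of cells ("RAM").
WIDTH, STEPS = 10_000, 100
world = [0] * WIDTH
world[WIDTH // 2] = 1

def step(cells, rule=110):
    padded = zip([0] + cells[:-1], cells, cells[1:] + [0])  # (left, center, right)
    return [(rule >> (4 * l + 2 * c + r)) & 1 for l, c, r in padded]

for _ in range(STEPS):
    world = step(world)
print(sum(world), "live cells after", STEPS, "steps")
```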
However, I asked the same question, which is what led to the blog post linked above, where I concluded that you wouldn’t save memory by only doing the computations for things observers look at: first, because they check for consistency and come back to verify that the laws of physics still work, forcing you to generate the object twice.
But more importantly (as I mentioned) because the 2nd law of thermodynamics means that any time you gain information about something in the universe, you necessarily lose just as much in the process of making that observation (for a human, it takes the form of e.g. waste heat, higher-entropy decomposition of fuels). So by learning about the universe through observation, you simultaneously relieve it of having to store at least as much information (about e.g. subatomic particles).
(This argument has not been peer-reviewed, but was based on Yudkowsky’s Engines of Cognition post.)
Assuming they don’t make any approximations other than collapse, yes, a classical computer simulating Copenhagen takes fewer arithmetic ops than simulating MWI. At least until someone in the simulation builds a sufficiently large coherent system (quantum computer), at which point the simulator has to choose between forbidding it (i.e. breaking the approximation guarantee) or spending exponentially many arithmetic ops.
Copenhagen (even in the absence of large coherent subsystems) does not take significantly less memory than MWI: both are in PSPACE.
Otoh, if the simulator is running on quantum-like physics too, then there’s no asymptotic difference in arithmetic either. And if you’re not going to assume that the simulator’s physics is similar to ours, who says it’s less rather than more computationally capable?
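To put very rough numbers on the arithmetic point in the first paragraph (a crude illustration only, ignoring every real optimization): brute-force evolution of a coherent register of n two-level systems touches 2^n amplitudes per step, whereas a register that is collapsed or decohered after each step only needs its n classical bits advanced.

```python
def coherent_ops_per_step(n: int) -> int:
    return 2 ** n  # amplitudes a dense state-vector update has to touch

def collapsed_ops_per_step(n: int) -> int:
    return n       # just one classical configuration to advance

for n in (10, 30, 60):
    print(n, coherent_ops_per_step(n), collapsed_ops_per_step(n))
# 60 coherent qubits already means ~1.15e18 amplitude updates per step
```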
googleplex = Google Inc’s HQ
googolplex = 10^(10^100)
It’s truly sad how people these days are less familiar with the original spelling and meaning of a googol. Now the first thing we think of is the search engine, instead of 10^100.
Is that really so sad? googol was named in jest and I do not think I have ever seen it seriously needed for anything; Google on the other hand...
If you implemented the laws of physics on a computer, using lazy evaluation, then whatever is “over the horizon” from the observer process(es) would not be computed.
However, this would not in the least be observable from inside the system. If the observer moved to observe you, your past would be “retroactively” computed.
I’m not claiming this is very likely to be the case, since at the very least it requires an additional agent—the observer process—to cause anything to happen at all, but lazy evaluation isn’t some weird ad-hoc concept; it’s a basic concept in computer science that also happens to make programs shorter, a lot of the time.
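For anyone who hasn’t run into it, here is a minimal sketch of the lazy-evaluation idea in Python (memoized thunks rather than a genuinely lazy language, and the “laws of physics” here are a made-up stand-in):

```python
computed = set()  # which cells the simulator has actually had to evaluate

def cell_value(x: int) -> int:
    computed.add(x)
    return (x * x) % 7  # stand-in "laws of physics" for cell x

class LazyWorld:
    """An unbounded 1-D world whose cells are only computed when observed."""
    def __init__(self):
        self._cache = {}

    def observe(self, x: int) -> int:  # forcing the thunk for cell x
        if x not in self._cache:
            self._cache[x] = cell_value(x)
        return self._cache[x]

world = LazyWorld()
for x in range(-3, 4):   # the observer only ever looks at a small patch
    world.observe(x)
print(sorted(computed))  # everything "over the horizon" was never computed
```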
Hopefully not sufficiently shorter that a universe using lazy evaluation with one random point in space somewhere as the observer is less complex than one using strict evaluation. That... would be impossible for us to detect, of course, but I believe it’d still have consequences.
If the universe we’re living in is a work of art or a game, it’s made for minds with much greater processing power than we’ve got. It isn’t obvious that they’d be satisfied with something as crude as a video game.
How about a video game where you attempt to control a pre-singularity global civilization by directly playing a few thousand randomly selected humans simultaneously, while not letting this fact be noticed by the NPCs?
It’s interesting to wonder what sort of games post-humans might play, though I hope it won’t be anything quite that ethically objectionable.
Or, from the perspective of a pre-post-human, quite that dull. If I am going to play that kind of sim I’m going to pick the ‘elves’ faction.
Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.
You’re serious? That scares me.
I think it was originally meant for training, but yes. People play it. As a game.
http://www.youtube.com/watch?v=HIVFjtZzDr8
It could be that it was the elves who picked the ‘humans’ faction.
If you can understand how the two are truly the same, you are far wiser than anyone I’ve ever met, and I would very much like to subscribe to your newsletter. I hope the first issue explains how this dichotomy is invalid.
A video game can be deterministic or not in the same way any other kind of universe can. “Video game” vs “deterministic” is just a silly comparison. I don’t know what word to use in place of ‘deterministic’, I just don’t think that one is the right one.
I’m thinking “algorithmic”. That is, the universe, or a video game, follows a certain algorithm to determine what happens next, whether the algorithm is the laws of physics or a computer program. Algorithms aren’t necessarily deterministic: we could have a step for “generate a truly random (quantum) number”.
Just plain, “no.”
There is, to my knowledge, exactly zero evidence indicating that the creation and execution of the laws governing the universe resembles that of video games in any way. There’s some sense in which the term “system” applies to both, I admit, but that’s about it, and “system” is a pretty broad word.
You mean, besides the predictive power of the mathematical formalizations of Occam’s Razor, as opposed to a linguistic or pathetic formulation?
The universe looks very falsifiably like a computer program.