… the computational irreducibility of most existing phenomena...
This part strikes me as the main weak point of the argument. Even if “most” computations, for some sense of “most”, are irreducible, the extremely vast majority of physical phenomena in our own universe are extremely reducible computationally, especially if we’re willing to randomly sample a trajectory when the system is chaotic.
Just looking around my room right now:
The large majority of the room’s contents are solid objects just sitting in place (relative to Earth’s surface), with some random thermal vibrations which a simulation would presumably sample.
Then there’s the air, which should be efficiently simulable with an adaptive grid method.
There’s my computer, which can of course be modeled very well as embedding a certain abstract computational machine, and when something in the environment violates that model a simulation could switch over to the lower level.
Of course the most complicated thing in the room is probably me, and I would require a whole complicated stack of software to simulate efficiently. But even with today’s relatively primitive simulation technology, multiscale modeling is a thriving topic of research: the dream is to e.g. use molecular dynamics to find reaction kinetics, then reaction kinetics at the cell-scale to find evolution rules for cells and signalling states, then cell and signalling evolution rules to simulate whole organs efficiently, etc.
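To make that handoff between scales concrete, here is a minimal toy sketch (purely illustrative; none of this reflects a real multiscale pipeline, and all rates and sizes are made up): a stochastic "microscale" simulation of a single A → B reaction is used to extract an effective rate constant, which is then the only thing the cheap "coarse" model needs.

```python
import random
import math

# --- Microscale: stochastic event-by-event simulation of A -> B ---
TRUE_RATE = 0.8          # per-molecule rate; the coarse model never sees this directly
N0 = 5000                # initial number of A molecules

def microscale_run(n0, rate):
    """Simulate individual A -> B conversion events; return (times, counts)."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0:
        t += random.expovariate(rate * n)   # waiting time to the next event
        n -= 1
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = microscale_run(N0, TRUE_RATE)

# --- Parameter extraction: fit an effective rate k from the microscale data ---
# For first-order decay, ln(N(t)/N0) = -k t, so estimate k as a regression slope.
xs = times[: N0 // 2]                        # early portion, where counts are large
ys = [math.log(c / N0) for c in counts[: N0 // 2]]
k_eff = -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# --- Coarse scale: deterministic model that only knows the extracted parameter ---
def coarse_model(n0, k, t):
    return n0 * math.exp(-k * t)

print(f"extracted k_eff = {k_eff:.3f} (true rate {TRUE_RATE})")
for t in (0.5, 1.0, 2.0):
    idx = min(range(len(times)), key=lambda i: abs(times[i] - t))
    print(f"t={t}: microscale {counts[idx]:6d}   coarse {coarse_model(N0, k_eff, t):8.1f}")
```

The coarse model throws away every detail of the microscale trajectory except one fitted number, yet tracks the population-level behavior closely, which is the basic bet multiscale modeling makes.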
Thanks for the comment. I mostly agree, but I think I use “reducible” in a stronger sense than you do (maybe I should have specified it): by reducible I mean something that entails no loss of information, like a lossless compression. In the examples you give, some information that is considered negligible gets discarded. The thing is, I think those details could still make a difference to the state of the system at time t+Δt, since sensitivity to initial conditions in “real” systems can end up producing big differences. So we have to somehow take into account a very high level of detail, if not all of it. (Whether a perfect level of detail is even achievable is another problem.) If we don’t, the simulation would arguably not be accurate.
So, yes, I agree that “the extremely vast majority of physical phenomena in our own universe” are reducible, but only in a weak sense that leaves such simulations unable to reliably predict the future, and thus non-sims won’t be incentivized to build them.
I buy that, insofar as the use-case for simulation actually requires predicting the full state of chaotic systems far into the future. But our actual use-cases for simulation don’t generally require that. For instance, presumably there is ample incentive to simulate turbulent fluid dynamics inside a jet engine, even though the tiny eddies realized in any run of the physical engine will not exactly match the tiny eddies realized in any run of the simulated engine. For engineering applications, sampling from the distribution is usually fine.
From a theoretical perspective: the reason samples are usually fine for engineering purposes is that we want our designs to work consistently. If a design fails one time in n, then with very high probability it only takes O(n) random samples to find a case where the design fails, and that provides the feedback needed from the simulation.
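To spell out the arithmetic: if each run fails independently with probability 1/n, the chance of seeing no failure in c·n runs is (1 − 1/n)^{cn} ≈ e^{−c}, which shrinks fast in c. A throwaway check (the failure model here is purely a stand-in):

```python
import random

def simulate_design(p_fail):
    """Stand-in for one simulated test of a design that fails with probability p_fail."""
    return random.random() < p_fail

def runs_until_first_failure(p_fail):
    runs = 0
    while True:
        runs += 1
        if simulate_design(p_fail):
            return runs

n = 1000                      # design fails roughly 1 time in n
trials = [runs_until_first_failure(1 / n) for _ in range(2000)]
within_3n = sum(r <= 3 * n for r in trials) / len(trials)
print(f"mean runs to first failure: {sum(trials) / len(trials):.0f} (n = {n})")
print(f"fraction of trials finding a failure within 3n runs: {within_3n:.2f}")
# Expect a mean near n, and about 1 - e^{-3} ≈ 0.95 of trials to find a failure within 3n runs.
```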
More generally, insofar as a system is chaotic and therefore dependent on quantum randomness, the distribution is in fact the main thing I want to know, and I can get a reasonable look at the distribution by sampling from it a few times.
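A toy illustration of that last point, using the logistic map at r = 4 as a stand-in for a chaotic system: two runs started 10⁻¹⁰ apart disagree pointwise within a few dozen steps, yet their long-run histograms of visited states are essentially identical, so a handful of sampled runs still pins down the distribution.

```python
import numpy as np

def logistic_trajectory(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    xs = np.empty(steps)
    x = x0
    for i in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

a = logistic_trajectory(0.3, 100_000)
b = logistic_trajectory(0.3 + 1e-10, 100_000)   # microscopically different start

# Pointwise prediction fails almost immediately...
first_divergence = np.argmax(np.abs(a - b) > 0.1)
print(f"trajectories differ by >0.1 after step {first_divergence}")

# ...but the distributions of visited states are nearly identical.
hist_a, edges = np.histogram(a, bins=20, range=(0, 1), density=True)
hist_b, _ = np.histogram(b, bins=20, range=(0, 1), density=True)
print(f"max difference between the two histograms: {np.max(np.abs(hist_a - hist_b)):.3f}")
```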