if we’re in a simulation[1], i thought of a possible glitch that could be tested for: irrational numbers have infinitely many non-repeating digits, but it’s impossible to store an infinite expansion (at least under our universe’s physical laws). a program can specify an equation that produces an irrational number (like human-built programs do), but when it actually applies that value (e.g. in a physics engine), it has to approximate at some point. the test: measure something in the physical world that should involve an irrational, with incredible precision (beyond what we can currently achieve). if the measurement ever turns out to be perfectly precise (as in, it can’t be refined to show more decimal places), then we’re in a simulation that approximated the application of an irrational.
[1]: specifically, one which didn’t care to prevent experimental confirmation of this
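The approximation point is easy to demonstrate with ordinary 64-bit floats, which can only ever hold a rational approximation of an irrational value; a minimal Python sketch:

```python
import math

# A float64 carries 53 bits of significand, so any "irrational" it
# stores is actually a nearby rational approximation.
r = math.sqrt(2.0)      # a 53-bit binary approximation of sqrt(2)

print(r * r == 2.0)     # False: squaring the approximation misses 2
print(r * r - 2.0)      # tiny but nonzero residue, on the order of 1e-16
```

Whether a simulating computer would use anything like IEEE floats is of course an assumption; this only illustrates that *some* finite cutoff must exist.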
A way to understand the issues with the simulation argument is that it assumes the additional existence of things (e.g., a supercomputer, a civilization that built that supercomputer, etc.). It pays a huge a priori credence cost (the extreme Solomonoff complexity of its description length) and can be dismissed almost instantly. Additionally, even if it were on par in a priori credence with the reality hypothesis, it should still be dismissed, because it’s better to be wrong as a simulation that thinks it’s real than to be wrong as a real being who thinks it’s simulated. The latter is infinitely worse than the former.
Even more simply, simulationism is just creationism for the 21st century; it’s simply the wrong kind of creationism. (I’m a Christian, so I’m sure you can see how sad I find the simulationists.)
Thanks for sharing your thoughts c:
How do you test whether a measurement is perfectly precise? All real-world measurements have errors and imprecision, and every interval contains infinitely many numbers with finite representations and infinitely many with no finite representation, in pretty much every nontrivial representation system. Our ability to distinguish between real-valued measurements is extremely poor compared with the density of numbers you can represent even in 64 bits, let alone the more than a trillion bits that might be employed in some hypothetical computer capable of simulating our universe.
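To make the density comparison concrete, here is a small Python sketch; the 1e-12 relative uncertainty is an illustrative assumption, not a claim about any particular instrument:

```python
import math

# Gap between adjacent 64-bit floats near 1.0 (one "unit in the last place"):
gap = math.ulp(1.0)
print(gap)  # about 2.2e-16

# Suppose a (hypothetically very good) measurement with relative
# uncertainty of one part in 1e12 near the same value:
measurement_uncertainty = 1e-12

# Thousands of distinct float64 values fit inside that single error bar,
# so no measurement at this precision could reveal the grid spacing.
print(measurement_uncertainty / gap)
```

Even at 64 bits the representable grid is far finer than the error bar; a simulator with vastly more bits would make the gap unobservably smaller still.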
Also note that many irrational numbers can be stored, and exact arithmetic done on them, within some bounded number of bits, though for any representation system there will always be numbers (including rational numbers!) that it cannot represent. This doesn’t have a real effect on your argument, but I thought it might be useful to mention.
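As a sketch of that "exact arithmetic on some irrationals in bounded bits" point: numbers of the form a + b·√2 with rational a and b are closed under multiplication, so they can be stored and multiplied exactly. This toy class (an illustration, not any standard library) is one way to do it in Python:

```python
from fractions import Fraction

class Sqrt2Num:
    """Exact numbers of the form a + b*sqrt(2), with rational a, b."""

    def __init__(self, a, b=0):
        self.a = Fraction(a)
        self.b = Fraction(b)

    def __mul__(self, other):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return Sqrt2Num(self.a * other.a + 2 * self.b * other.b,
                        self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

root2 = Sqrt2Num(0, 1)                 # exactly sqrt(2), in a handful of bits
print(root2 * root2 == Sqrt2Num(2))    # True: no rounding anywhere
```

The flip side mentioned above also shows up here: `Fraction` represents 1/3 exactly even though 1/3 has no finite binary expansion, while this class still cannot represent, say, π.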