If you perfectly predict something (as Omega supposedly does), you must run a model on some hardware equivalent.
Nope!
For example, many programmers will be able to predict the output of this function without running it or walking through it mentally (which would require too much effort):
def predictable():
    epiphenomenon = 0
    for i in range(1000):
        for j in range(1, 1000):
            if epiphenomenon % j == i:
                epiphenomenon += i * j
            else:
                epiphenomenon = j - epiphenomenon
            epiphenomenon += 1
    # none of the work above affects the return value
    return 42
Ignoring the fact that this is a contrived edge case of disputable relevance to the Omega-predicting-human-decisions problem, there is still a model being run.
Why does it need to be a programmer? Why would non-programmers not be able to predict the output of this function with 100% accuracy?
What, then, is the difference in what a programmer does versus what a non-programmer does?
Clearly, the programmer has a more accurate mental model of what the function does, how it works, and what its compiler (and whatever executes the compiled code) or interpreter will do. Whether the function is “truly run” or “truly simulated” is at this point a metaphysical question, similar to asking whether a mind is truly aware if you merely write out each of its computation steps using large numbers of small stones on the sand of an immense desert.
If you take a functional perspective, these are all equivalent:
f1 = 1+1, f2 = 2, f3 = 1+1+1-1
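To make the equivalence concrete, here is a minimal Python sketch; the function names f1, f2, f3 are just illustrative wrappers around the three expressions above:

def f1():
    return 1 + 1

def f2():
    return 2

def f3():
    return 1 + 1 + 1 - 1

# treated as black boxes, all three compute the same value
assert f1() == f2() == f3() == 2

Their internal arithmetic differs, but nothing downstream of the return value can tell them apart.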
When you run, say, a Java Virtual Machine on various kinds of hardware, the fundamental chipset instructions it translates to may all be different, but the results are still equivalent.
When you upload your brain, you’d expect the hardware implementation to differ from your current wetware implementation—yet as long as the output is identical, as long as they are indistinguishable when put into black boxes, you probably wouldn’t mind (cf. Turing tests).
Now, if you demand a perfect correspondence between two functions, then by necessity their compressed representations of the parts relevant to the output have to be isomorphic.
The example you provided reduces to “def predictable(): return 42”. These are the parts relevant to the output, “the components involved in that decision” (I should have stressed that more).
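A minimal sketch of that reduction, assuming the predictable() definition from above is in scope (predictable_reduced is my name for the compressed version, not anything from the original):

def predictable_reduced():
    # everything irrelevant to the output has been stripped away
    return 42

# as black boxes, the original and the reduction are indistinguishable
assert predictable() == predictable_reduced() == 42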
If you predict the output of predictable() perfectly, you are simulating the (compressed) components involved in that decision—or a functionally equivalent procedure—perfectly.