So basically, what you’re asking for is a finite-length procedure that will tell an irrational-number output from a finite-description-length output? The trouble is, there’s no such procedure, so long as you can have a Turing machine big enough to fool the finite-length procedure.
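To make the fooling argument concrete, here is a minimal sketch (the function names and the cutoff `n_faked` are my own illustrative choices, not anything from the discussion): a machine with a finite description whose output agrees with sqrt(2) digit-for-digit out to an arbitrary cutoff, then goes rational. Any fixed test that inspects fewer digits than the cutoff cannot tell the two apart.

```python
from decimal import Decimal, getcontext

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the decimal point."""
    getcontext().prec = n + 10  # extra guard digits for safety
    digits = str(Decimal(2).sqrt()).replace(".", "")
    return digits[1:n + 1]  # drop the leading "1"

def impostor_digit(k, n_faked=1000):
    """A finite-description machine: matches sqrt(2) for the first
    n_faked digits, then outputs 0 forever (hence a rational number).
    Any test that only inspects fewer than n_faked digits is fooled."""
    if k < n_faked:
        return int(sqrt2_digits(n_faked)[k])
    return 0

# The first 50 digits agree with sqrt(2) exactly:
real = [int(d) for d in sqrt2_digits(50)]
fake = [impostor_digit(k) for k in range(50)]
assert real == fake
```

Since the cutoff can be made as large as you like at only modest cost in machine size, no single finite test settles the question.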
If you knew the size of the machine, though, you might be able to establish efficiency constraints and do a test.
As for the physics, I agree, fundamental quantization is possible, if untested. Hence why I said things like “hypothesized-continuous.” Though once we start taking averages (the < > brackets), you can still have a superposition with any average—to get around that you’d need quantum amplitude to be quantized (possible).
Ok, now the hypothesized-continuous quantity isn’t so much voltage as quantum amplitude. Which actually is a rather better argument in the first place, so let’s run with that!
I would then ask, is there really a meaningful physical difference between the state A|1> + B|2>, and the state (A+epsilon)|1> + (B-epsilon)|2>? (Let’s hope the ket notation makes it through the Markdown. Anyway.) Observe that the rest of the universe actually interacts with the underlying pure states |1> and |2>; the amplitudes only change the probabilities of outcomes (in Copenhagen) or the measure of worlds (in MW). For sufficiently small epsilon it does not seem to me that either of these changes is actually observable by any entity, conscious or otherwise. In that case, as I say, I do not quite understand what it means to say that a physical process has “computed” epsilon. Perhaps a round of Taboo is in order?
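To put a number on "not observable by any entity": under the Born rule, perturbing the amplitudes by epsilon shifts the outcome probabilities by an amount of order epsilon. A quick sketch (the specific amplitudes here are just an example I picked):

```python
import math

# A normalized two-state superposition A|1> + B|2> (example amplitudes).
A, B = math.sqrt(0.3), math.sqrt(0.7)
eps = 1e-12

def probs(a, b):
    """Born-rule outcome probabilities, renormalized for safety."""
    n = a * a + b * b
    return a * a / n, b * b / n

p1, _ = probs(A, B)
p1_eps, _ = probs(A + eps, B - eps)

# The probability shift is first-order in eps, so for tiny eps it sits
# far below any conceivable experimental resolution:
shift = abs(p1_eps - p1)
assert shift < 1e-11
```

Since the shift scales linearly with epsilon, driving epsilon down drives the only observable consequence down with it, which is the sense in which the question "what computed epsilon?" loses its grip.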
So, what I think is that for some continuous output and any epsilon you care to name, one can construct a totally normal computer with resources 1/delta that can approximate the continuous output to within epsilon.
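As a sketch of the "totally normal computer" half of the claim, here is a standard bisection routine approximating an irrational output (sqrt(2), as the root of x² − 2) to within any epsilon you name, while counting the steps it spends doing so (the function names are mine, for illustration):

```python
def approximate(f, lo, hi, eps):
    """Bisection: approximate a (generally irrational) root of f on
    [lo, hi] to within eps, returning the estimate and the number of
    halving steps used. The step count grows like log2((hi - lo)/eps),
    so finite resources buy any finite precision."""
    steps = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:  # root lies in the left half
            hi = mid
        else:                    # root lies in the right half
            lo = mid
        steps += 1
    return (lo + hi) / 2, steps

# Approximate sqrt(2) as the root of x^2 - 2 on [1, 2]:
x, steps = approximate(lambda t: t * t - 2, 1.0, 2.0, 1e-9)
assert abs(x - 2 ** 0.5) < 1e-9
```

Note that for bisection the resources actually grow only logarithmically in 1/epsilon; the interesting complexity question below is exactly how this scaling compares across classes of continuous outputs.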
Proceeding from there, the more interesting (and more observable) question is the computational-complexity one: does delta shrink faster or slower than epsilon? If it shrinks sufficiently faster for some class of continuous outputs, that means we can build a real-number-based computer that goes faster than a classical computer with the same resources.
In this sense, quantum computers are already hypercomputers for being able to factor numbers efficiently, but they’re not quite what I mean. So let me amend that to a slightly stronger sense, where the machine can actually output something that would take infinite time to compute classically; we just only care about it to within precision epsilon :P