It seems like you can go beyond Turing machines as long as you’re willing to take the output in the form of something a Turing machine cannot output. But “let’s measure this physical system and then write down some finite-length numbers” is still just numbers, something a Turing machine can do. Instead, a physical hypercomputer would have super-Turing precision at tasks like “take these input voltages and give me an output voltage.”
What is special about voltage?
It’s one of those hypothesized-continuous quantities (it’s one of those because position is quite possibly continuous, and voltage is a hypothesized-continuous function of position). This means you can have any voltage between 0 and 1. If you want to input numbers into a Turing machine, though, they have to have finite Kolmogorov complexity. So there are voltages you can have that can’t be described by any finite string.
This is what I mean by “take an output in some form”—if you take the output in the form of a voltage, there are more voltages than there are finite-length descriptions of voltages. Which means that the voltage itself has to be the output—not a description of the voltage, which would obviously be describable, and thus (barring some Gödelian hijinks, perhaps) replicable by a Turing machine.
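The counting argument above can be sketched in a few lines (purely illustrative, plain Python): enumerate every finite binary string, then diagonalize to build a binary expansion that disagrees with each enumerated description somewhere, so the finite descriptions cannot cover all of [0, 1].

```python
from itertools import count, product

def finite_strings():
    """Enumerate every finite binary string: the set of descriptions is countable."""
    for n in count(1):
        yield from (''.join(bits) for bits in product('01', repeat=n))

def diagonal_bits(n):
    """Cantor diagonal: build n bits of a binary expansion that differs from
    the i-th enumerated description at position i (short strings are treated
    as padded with '0'), so it matches none of the descriptions."""
    gen = finite_strings()
    out = []
    for i in range(n):
        s = next(gen)
        bit = s[i] if i < len(s) else '0'
        out.append('1' if bit == '0' else '0')
    return ''.join(out)

print(diagonal_bits(8))
```

Any finite prefix of the diagonal number is of course computable; the uncomputability only appears in the infinite limit, which is exactly where “output a voltage” and “output a description of a voltage” come apart.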
In classical electrodynamics, voltage is a continuous function of position; agreed. It is not, however, clear to me that this is true in a QM formulation. Consider the hydrogen atom: the possible energies of its electron are quantised, and since those energies are binding energies due to the proton’s potential, it is not clear to me that a voltage between two allowed energy states meaningfully exists. At any rate it seems that no physical machine could output such a voltage; what would that mean? And since a physical machine ultimately consists of atoms, and sums over quantised states are themselves quantised, well then.
Let “voltage” be < k*e^2/r >, summed over pairs of different particles.
Your integral treats distance as unquantised; it is not clear that the true QM theory does this (consider the Planck length). Moreover, implemented as a physical machine, your atoms are going to be bound together somehow; those bonds will be quantised, and then the average distance is itself quantised, because you are dealing with sums over states with definite average interatomic distances. You can move the whole machine, but you can’t move just a part of it with arbitrary precision; you have to go between specific combinations of quantised interatomic binding states. Finally, just because a theory can express some quantity mathematically doesn’t mean the quantity meaningfully exists in the modelled system: what are the physical consequences of the voltage being X rather than X+epsilon? If you (or any physical system you care to name) can’t measure the difference, then it’s not clear to me in what sense your machine is “outputting” the voltage.
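For concreteness, the quoted expectation can be evaluated numerically for a single hydrogen 1s electron, taking the pairwise quantity to be the Coulomb potential energy k e²/r. This is a rough midpoint-rule sketch with textbook constants (nothing here comes from the thread itself); the analytic answer is k e²/a₀, about 27.2 eV.

```python
import math

# CODATA-ish constants
a0 = 5.29177e-11   # Bohr radius, m
k  = 8.98755e9     # Coulomb constant, N m^2 C^-2
e  = 1.60218e-19   # elementary charge, C

def coulomb_expectation(n=100000, rmax_factor=40):
    """Midpoint-rule estimate of <k e^2 / r> for the hydrogen 1s state,
    R(r) = 2 a0^(-3/2) exp(-r/a0), with radial weight r^2 dr."""
    rmax = rmax_factor * a0
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        R = 2.0 * a0 ** -1.5 * math.exp(-r / a0)
        total += R * R * (k * e * e / r) * r * r * dr
    return total

ev = coulomb_expectation() / e   # joules -> eV
print(f"<k e^2 / r> ~ {ev:.3f} eV")  # analytic value is k e^2 / a0 ~ 27.2 eV
```

The point of the exercise: the expectation is a perfectly smooth function of the wavefunction on paper, which is exactly where the “does an in-between value physically exist” question bites.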
So basically, what you’re asking for is a finite-length procedure that can distinguish an irrational-number output from a finite-description-length output? The trouble is, there’s no such procedure, as long as you can have a Turing machine big enough to fool the finite-length procedure.
If you knew the size of the machine, though, you might be able to establish efficiency constraints and do a test.
As for the physics, I agree, fundamental quantization is possible, if untested. Hence why I said things like “hypothesized-continuous.” Though once we start taking averages (the < > brackets), you can still have a superposition with any average—to get around that you’d need quantum amplitude to be quantized (possible).
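A toy illustration of the “any average” point, using assumed values for hydrogen’s two lowest levels: the levels themselves are discrete, but the expectation value of a superposition is a continuous function of the amplitudes.

```python
import math

# Assumed values: hydrogen's two lowest energy levels, in eV.
E1, E2 = -13.6, -3.4

def average_energy(theta):
    """<E> for the superposition cos(theta)|1> + sin(theta)|2>.
    The levels are quantised, but the average sweeps continuously
    between E1 and E2 as theta varies."""
    return math.cos(theta) ** 2 * E1 + math.sin(theta) ** 2 * E2

for theta in (0.0, 0.5, 1.0, math.pi / 2):
    print(f"theta = {theta:.2f}  ->  <E> = {average_energy(theta):+.3f} eV")
```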
Ok, now the hypothesized-continuous quantity isn’t so much voltage as quantum amplitude. Which actually is a rather better argument in the first place, so let’s run with that!
I would then ask, is there really a meaningful physical difference between the state A|1> + B|2>, and the state (A+epsilon)|1> + (B-epsilon)|2>? (Let’s hope the ket notation makes it through the Markdown. Anyway.) Observe that the rest of the universe actually interacts with the underlying pure states |1> and |2>; the amplitudes only change the probabilities of outcomes (in Copenhagen) or the measure of worlds (in MW). For sufficiently small epsilon it does not seem to me that either of these changes is actually observable by any entity, conscious or otherwise. In that case, as I say, I do not quite understand what it means to say that a physical process has “computed” epsilon. Perhaps a round of Taboo is in order?
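A back-of-envelope sketch of why a small epsilon shift in amplitude is statistically invisible. The sample-count estimate at the end is the usual binomial standard-error heuristic (error shrinks like 1/sqrt(N)), not anything exact.

```python
import math

# Amplitudes of an equal superposition A|1> + B|2>, then a tiny perturbation.
A = B = math.sqrt(0.5)
eps = 1e-9
A2, B2 = A + eps, B - eps
norm = math.hypot(A2, B2)          # renormalize the perturbed state
A2, B2 = A2 / norm, B2 / norm

dp = abs(A2 ** 2 - A ** 2)         # shift in the |1> outcome probability
samples = 1.0 / dp ** 2            # binomial error ~ 1/sqrt(N), so resolving
                                   # a shift of dp needs roughly N ~ 1/dp^2 runs
print(f"probability shift ~ {dp:.2e}; measurements needed ~ {samples:.1e}")
```

For epsilon around 10^-9 you already need on the order of 10^17 measurements just to notice the difference, which is the operational sense in which the perturbed state is unobservable.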
So, what I think is that for some continuous output and any epsilon you care to name, one can construct a totally normal computer with resources proportional to 1/delta that can approximate the continuous output to within epsilon.
Proceeding from there, the more interesting question (and the most observable question) is more like the computational complexity question—does delta shrink faster or slower than epsilon? If it shrinks sufficiently faster for some class of continuous outputs, this means we can build a real-number based computer that goes faster than a classical computer with the same resources.
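For a concrete baseline case, here is bisection approximating sqrt(2): a perfectly finite machine whose resource count grows only like log(1/epsilon), so there is no continuous-computer advantage for outputs like this. The interesting candidates for the question above are outputs where the classical cost blows up much faster.

```python
def bisect_sqrt2(eps):
    """Approximate sqrt(2) to within eps by bisection on [1, 2].
    Returns (estimate, number_of_steps); steps grow like log2(1/eps)."""
    lo, hi, steps = 1.0, 2.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if mid * mid < 2.0:
            lo = mid
        else:
            hi = mid
        steps += 1
    return (lo + hi) / 2.0, steps

for eps in (1e-3, 1e-6, 1e-9):
    est, steps = bisect_sqrt2(eps)
    print(f"eps = {eps:g}: {steps} steps, error = {abs(est - 2 ** 0.5):.1e}")
```

Halving epsilon costs exactly one more step, which is the "delta shrinks slower than epsilon" regime where a classical machine is already efficient.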
In this sense, quantum computers are already hypercomputers for being able to factor numbers efficiently, but they’re not quite what I mean. So let me amend that to a slightly stronger sense where the machine actually can output something that would take infinite time to compute classically, we just only care to within precision epsilon :P
Given the presence of various forms of noise, I’m not sure what it would mean to measure a voltage to kilobits worth of precision. At some point I start asking how you ensure that the noise from electrons tunneling into and out of your experimental apparatus is handled. I’m also not sure that there’s even a theoretical sense in which you can make your machine cold enough to reduce the noise enough to get that sort of precision.
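To put a number on "kilobits worth of precision", here is a Johnson–Nyquist thermal-noise estimate; the 1 kOhm source resistance and 1 Hz measurement bandwidth are assumptions chosen for illustration. Even heroic cooling buys only tens of bits, nowhere near thousands.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def precision_bits(T, R, bandwidth, full_scale=1.0):
    """Bits of voltage resolution before Johnson-Nyquist thermal noise
    (v_rms = sqrt(4 kB T R df)) swamps a full_scale-volt signal."""
    v_noise = math.sqrt(4.0 * kB * T * R * bandwidth)
    return math.log2(full_scale / v_noise)

# Assumed: 1 kOhm source, 1 Hz measurement bandwidth.
print(f"room temperature: {precision_bits(300.0, 1e3, 1.0):.1f} bits")
print(f"1 millikelvin:    {precision_bits(1e-3, 1e3, 1.0):.1f} bits")
```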
I understand what a nine-digit voltmeter looks like. And I can extrapolate based on that, assume some things about improved materials, temperature control, extended measurement times, and so on, and would be willing to believe that the next nine digits are “mere engineering”. Maybe even the nine digits after that, and the next nine. But taking that to mean you can extend it out to infinity (literally!) seems like a logical and physical fallacy.
Fair enough. In the low-noise limit, a continuous computer actually has to become a type of quantum computer, where the output is just some continuously-valued quantum state.
But that’s a doable thing, that we can actually do. Hm. Sort of.
This is a confusion about what a Turing machine represents. The method of output data isn’t relevant here. The voltage is simply one way of representing numbers.