Yes, but that’s not relevant to the definition of Turing equivalence/completeness/universality.
Every Turing machine definition I’ve ever seen says that the tape has to be truly unbounded. How that’s formalized varies, but it always carries the sense that the program doesn’t ever have to worry about running out of tape. And every definition of Turing equivalence I’ve ever seen boils down to “can do any computation a Turing machine can do, with at most a bounded speedup or slowdown”. Which means that programs on a Turing-equivalent computer must never have to worry about running out of storage.
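The “never have to worry about running out of tape” abstraction can be sketched in a few lines (a toy illustration, not any standard library’s API): a tape backed by a dictionary that allocates cells on first access, so from the program’s point of view every cell already exists.

```python
from collections import defaultdict

# Toy sketch of an "unbounded" Turing-machine tape: untouched cells
# read as the blank symbol, so the program never sees an edge.
class Tape:
    def __init__(self, blank="_"):
        self.cells = defaultdict(lambda: blank)

    def read(self, pos):
        return self.cells[pos]

    def write(self, pos, symbol):
        self.cells[pos] = symbol

tape = Tape()
tape.write(0, "1")
tape.write(10**6, "1")   # a cell a million squares away is just "there"
print(tape.read(10**6))  # prints 1
print(tape.read(-5))     # prints _ (blank; never written)
```

Of course, the illusion only holds until the host machine’s real, finite memory runs out, which is exactly the point at issue.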
You can’t in fact build a computer that can run any arbitrary program and never run out of storage.
One of the explicitly stated conditions of the definition is not met. How is that not relevant to the definition?
The question isn’t if the specific computer at your hands
Your title says “finite physical device”. Any finite physical device (or at least any constructible finite physical device) can at least in principle be “the specific computer at your hands”. For finite physical devices to be Turing-equivalent, there would have to be some specific finite physical device that actually was Turing-equivalent. No such device can ever actually be constructed. In fact, no such device could exist even if it popped into being without having to be constructed at all.
can solve all Turing-computable problems, but rather if we had the ability to scale a computer’s memory, time and reliability indefinitely, could we solve the problem on an unbounded input and output domain without changing the code/descriptor?
I don’t think that is the question, and perhaps more importantly I don’t think that’s an interesting question. You don’t have that ability, you won’t get that ability, and you’ll never get close enough that it’s practical to ignore the limitation. So who cares?
… and if you’re going to talk in terms of fundamental math definitions that everybody uses, I think you have to stick to what they conventionally mean.
And for a lot of popular programming languages, like Lisp or Lambda Calculus, this is true.
Lisp is obviously Turing-complete. Any Lisp interpreter actually realized on any finite physical computer isn’t and can’t ever be. If you keep sticking more and more cells onto a list, eventually the Lisp abstraction will be violated by the program crashing with an out-of-memory error. You can’t actually implement “full Lisp” in the physical world.
On X86 being Turing Complete in at least 3 ways:
OK, it’s possible that there’s some subset of the X86 machine language that’s Turing-equivalent the same way Lisp is. I’m not going to try to figure out whatever hackery the examples do, since it’s probably very complicated and will probably never be of any actual use. But even if such a subset exists, it’s still not Turing-equivalent as actually implemented in any physical device.
Any actual physically constructed X86 computer will have a finite number of possible states, no matter what operations you use to manipulate them. There are only so many address wires coming out of the chip. There are only so many registers, memory cells, or whatever. Even if you put a Turing tape reader on it as a peripheral, there’s still a hard limit on how much tape it can actually have.
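The finite-state point is just counting: a machine whose total storage is N bits has exactly 2^N possible configurations. A back-of-envelope sketch (the 16 GiB figure is illustrative):

```python
import math

# A machine with N bits of total storage has exactly 2**N possible states.
# Illustrative figure: 16 GiB of RAM. Registers, caches, and flags add
# finitely many more bits and don't change the conclusion.
bits = 16 * 2**30 * 8                   # 16 GiB expressed in bits
digits = int(bits * math.log10(2)) + 1  # decimal digits in 2**bits

print(f"2**{bits} states, a number with about {digits:,} digits")
```

The count is astronomically large but finite, whereas a Turing machine’s configuration set (control state × tape contents × head position) is infinite.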
If you write a program that ignores that reality, and put it on an actual X86 computer, you won’t have created a Turing-complete physical computer. Once the input exceeds a certain size, the program simply won’t work. The physical hardware can’t support pushing the abstract language past a certain limit.
In the same way that you can switch to a computer with more memory, you can always switch to higher fixed-precision to run a transformer on something that needs that extra boost to execute properly.
No, you can’t. It’s possible to have a problem that requires so much precision that you can’t physically construct enough memory to hold even a single number.
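A back-of-envelope illustration of that claim (all figures are rough, chosen for the sake of the arithmetic): a number specified to p bits of precision needs at least p bits of storage, and p can be chosen to dwarf any conceivable amount of matter.

```python
# Illustrative figures: storing a number to p bits of precision takes at
# least p bits, one bit per bit.
atoms_in_observable_universe = 10**80  # commonly cited rough estimate
bits_per_atom = 10**10                 # absurdly generous storage density

storable_bits = atoms_in_observable_universe * bits_per_atom  # 10**90

# A single number demanding more precision than that cannot be held even
# by a computer built out of every atom we can see.
required_precision_bits = 10**100
print(required_precision_bits > storable_bits)  # prints True
```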
(with the enormous caveat that if we care about how efficient a program is, and don’t just care about whether we can solve a problem, then algorithmic considerations become relevant),
A useful definition of “can” has to take efficiency into account, because there are some resources you actually can’t provide. There’s not a lot of point in saying you “can” solve a problem when you really have no hope of applying the needed resources.
We use that practically all the time. That’s how cryptography works: you assume that your adversary won’t be able to do more than N operations in X time, where X is how long the cryptography has to be effective for.
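The “at most N operations in X time” assumption is easy to make quantitative (illustrative numbers, not a statement about any specific cipher): exhaustively searching a 128-bit keyspace at an already optimistic 10^18 guesses per second.

```python
# Illustrative numbers: brute-forcing a 128-bit keyspace at an
# (optimistic) 10**18 guesses per second.
keyspace = 2**128
guesses_per_second = 10**18
seconds_per_year = 31_557_600  # Julian year

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.2e} years")  # on the order of 10**13 years
```

That’s roughly a thousand times the current age of the universe, which is why “the adversary can’t do 2^128 operations” is treated as a safe bet rather than a mathematical certainty.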
the bottleneck is energy, which gives us memory, time and reliability.
Maybe, although I don’t think we can at present turn energy in just any form into any of the above, and I’m not sure that, in principle, unlimited energy translates into unlimited anything else. If I have some huge amount of energy in some tiny space, I have a black hole, not a computer.
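The black-hole remark can be quantified with standard formulas (the energy figure is illustrative): convert an energy E to its mass equivalent via E = mc², then compute the Schwarzschild radius r_s = 2GM/c².

```python
# Back-of-envelope: the mass equivalent of an energy E, and the radius
# below which that mass collapses into a black hole.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

E = 1e47            # joules -- a "huge amount of energy" (illustrative)
M = E / c**2        # ~1.1e30 kg, about half a solar mass
r_s = 2 * G * M / c**2

print(f"r_s = {r_s:.0f} m")  # a bit over 1.6 km
```

So packing that much energy into anything smaller than a couple of kilometers across gets you a black hole, not a computer.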
And from the perspective of the early 20th century, this was no small feat.
… but even if that were true, it wouldn’t make finite physical computers Turing equivalent.
If the plural weren’t “octopuses”, it would be “octopodes”. Not everything is Latin.