True, though any Turing-complete language can provably implement a sort that uses O(n log n) + k time, or O(log n) + k additional memory (or, for that matter, all of them can manage O(log n) memory outright if they have hard-drive write access and OS permission to run other programs). Here “k” is the amount of time it takes for this program to write and compile a compiler for some other, more efficient language, write the algorithm in that language, compile it, and then run it.
At the extreme, the program could be an AI specialized in working out the most efficient sorting algorithm for given hardware specs, figuring out the most efficient language to write it in, writing it, saving it, running it, and then deleting itself.
Obviously, this is far, far from being convenient or efficient.
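To make that escape hatch concrete, here’s a minimal sketch of the idea, purely as an illustration: a Python program that writes out a C implementation of the sort, compiles it with the system C compiler, and runs the binary. The choice of C, the filenames, and the assumption that a `cc` binary and process-spawning permissions are available are all mine, not anything guaranteed by the languages themselves; the one-time cost of the write/compile/launch steps is the “k” above.

```python
import os
import subprocess
import tempfile

# Hypothetical illustration of the "escape hatch": emit a C sort, compile it,
# run it. The one-time write/compile/launch cost is the additive constant k;
# the sort itself then runs with C's time/space characteristics.
# Assumes a POSIX-ish host with `cc` on the PATH and permission to spawn
# processes -- neither is guaranteed by Python-the-language.

C_SOURCE = r"""
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

int main(void) {
    long buf[1024];
    size_t n = 0;
    while (n < 1024 && scanf("%ld", &buf[n]) == 1) n++;
    qsort(buf, n, sizeof buf[0], cmp);   /* O(n log n) comparisons */
    for (size_t i = 0; i < n; i++) printf("%ld\n", buf[i]);
    return 0;
}
"""

def sort_via_c(numbers):
    with tempfile.TemporaryDirectory() as d:
        src, binary = os.path.join(d, "sort.c"), os.path.join(d, "sort")
        with open(src, "w") as f:
            f.write(C_SOURCE)
        # Writing and compiling the "other language" program: part of k.
        subprocess.run(["cc", "-O2", "-o", binary, src], check=True)
        out = subprocess.run([binary], input="\n".join(map(str, numbers)),
                             capture_output=True, text=True, check=True)
        return [int(line) for line in out.stdout.split()]

if __name__ == "__main__":
    print(sort_via_c([5, 3, 9, 1, 4]))
```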
My comment viewed a Turing-complete language as a sort of self-contained mathematical object, one that doesn’t necessarily have access to the operation “ask the OS to run this sequence of x86 instructions”. When people talk about Turing equivalence, that’s typically the meaning they use.
Interpreting one language in another is okay, but interpretation doesn’t necessarily preserve time and space complexity (try interpreting an impure language in a pure one).
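As a toy illustration of that last point (my own example, not anything from the discussion above): an impure host language gives O(1) in-place array writes, while a naive pure simulation that treats the array as an immutable value has to rebuild it on every write, turning each O(1) write into an O(n) one. Smarter persistent structures bring that down to roughly O(log n) per access, but still not O(1).

```python
# Hypothetical illustration of why interpretation need not preserve
# complexity. The impure write mutates in place; the "pure" write returns a
# fresh array with one slot changed, paying O(n) per write.

def write_impure(arr: list, i: int, value: int) -> None:
    # Host semantics: mutate in place, O(1).
    arr[i] = value

def write_pure(arr: tuple, i: int, value: int) -> tuple:
    # Pure simulation: build a new array with one slot changed, O(n).
    return arr[:i] + (value,) + arr[i + 1:]

# Filling an n-element array one slot at a time:
#   impure: n writes * O(1) = O(n) total
#   pure:   n writes * O(n) = O(n^2) total
n = 1000
impure = [0] * n
for i in range(n):
    write_impure(impure, i, i)

pure = (0,) * n
for i in range(n):
    pure = write_pure(pure, i, i)

assert list(pure) == impure
```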
Yeah, alright, thanks for that clarification. I didn’t realize I was thinking about it in terms of “Java/Python/C/etc. running on some traditional computer”.
Agreed that interpretation is quite prone to overhead and delays.