All Turing-complete languages are equivalent, so in the end what really matters is convenience of use.
Well, any Turing-complete language can implement a sort, but not every language can implement a sort in O(n log n) time, or a sort that uses O(log n) additional memory. Sorry, just being nitpicky.
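For what it’s worth, those bounds are hit by textbook algorithms. Here’s a minimal heapsort sketch, in Python purely for illustration: O(n log n) comparisons and O(1) auxiliary memory, since the sift-down is iterative rather than recursive.

```python
# Minimal in-place heapsort: O(n log n) time, O(1) auxiliary memory
# (no recursion stack). Illustrative sketch, not tuned for real use.

def heapsort(a):
    n = len(a)

    def sift_down(root, end):
        # Push a[root] down until the max-heap property holds in a[:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1          # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Build a max-heap, then repeatedly move the max to the end.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

xs = [5, 2, 9, 1]
heapsort(xs)
print(xs)  # [1, 2, 5, 9]
```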
True, though any Turing-complete language can provably implement a sort that takes O(n log n) + k time or uses O(log n) + k additional memory (and any of them can manage O(log n) memory outright, given hard drive write access and OS permission to run other programs). Here “k” is the constant overhead of having the program write and compile a compiler for some other, more efficient language, write the algorithm in that language, compile it, and then run it.
At the extreme, the program could be an AI specialized in figuring out the most efficient sorting algorithm for the given hardware, then the most efficient language to write it in, then writing it, saving it, running it, and deleting itself.
Obviously, this is far, far from being convenient or efficient.
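For concreteness, here’s a toy sketch of that “pay k once” maneuver, with Python standing in for the slow host language and C for the fast target. The cc invocation and temp-file layout are assumptions about the host system, not part of the argument.

```python
# Toy sketch of the "pay k once" trick: emit a sort in a faster
# language (C), compile it, and hand the work off. Assumes a "cc"
# compiler is on PATH; everything here is illustrative.
import os
import subprocess
import tempfile

C_SORT = r"""
#include <stdio.h>
#include <stdlib.h>
static int cmp(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}
int main(void) {
    int xs[] = {5, 2, 9, 1};
    qsort(xs, 4, sizeof xs[0], cmp);
    for (int i = 0; i < 4; i++) printf("%d ", xs[i]);
    return 0;
}
"""

def delegate_sort():
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "sort.c")
        exe = os.path.join(d, "sort")
        with open(src, "w") as f:
            f.write(C_SORT)
        # The constant k lives here: compile once, then run.
        subprocess.run(["cc", src, "-O2", "-o", exe], check=True)
        out = subprocess.run([exe], capture_output=True, text=True, check=True)
        print(out.stdout.strip())  # 1 2 5 9

delegate_sort()
```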
My comment viewed a Turing-complete language as a self-contained mathematical object, one that doesn’t necessarily have access to an operation like “ask the OS to run this sequence of x86 instructions”. When people talk about Turing equivalence, that’s typically the meaning they use.
Interpreting one language in another is okay, but interpretation doesn’t necessarily preserve time and space complexity (try interpreting an impure language in a pure one).
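A small, contrived illustration of that complexity loss, sketched in Python with an immutable tuple standing in for a pure language’s data: each O(1) in-place write becomes an O(n) copy (a persistent tree would get that down to O(log n) per write, but still not O(1)).

```python
# "Pure" simulation of a mutable array via an immutable tuple:
# every update rebuilds the whole structure, so n writes cost
# O(n^2) instead of O(n). Illustrative sketch only.

def impure_fill(n):
    a = [0] * n
    for i in range(n):
        a[i] = i                      # O(1) per write, O(n) total
    return a

def pure_fill(n):
    a = (0,) * n
    for i in range(n):
        a = a[:i] + (i,) + a[i + 1:]  # O(n) per "write", O(n^2) total
    return list(a)

assert impure_fill(100) == pure_fill(100)
```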
Yeah, alright, thanks for the clarification. I hadn’t realized I was implicitly thinking of it as “Java/Python/C/etc. running on some traditional computer”.
Agreed that interpretation is quite prone to overhead and slowdowns.