If you could automatically pipe output back into the “computation speed” parameter, without being limited by passing around increasingly large strings, then you could get truly infinite compute in finite time.
How would that work exactly? Let's say you get the output to be the largest possible number producible with the number of computations currently allowed; as long as the "computation speed" parameter is finite, that output will be finite as well, albeit incredibly large. Every step taken will only increase the computation speed by a finite amount, so how would you reach infinity in a finite time?
Let's say we have API access to the magic box—we can call magic(code, number) to run it. Furthermore, we can do this from within code being executed by the magic box itself.
Then, we could run a function like f(n): magic("f(2*n)", n). So the function has the box run a copy of itself with n doubled. With each recursive step the speed doubles, so the computation time (roughly) halves. The time will actually shrink a bit more slowly than that, since n gains a bit at each step, but that's a linear overhead against an exponential speedup, so the per-step time still falls at least geometrically. The total runtime is then a convergent geometric series (something like 1 + 1⁄2 + 1⁄4 + …), so it comes out to a finite number. Thus, infinite computation in finite time. (This particular infinite computation doesn't actually do anything besides recurse infinitely, but it can be modified.)
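The magic() call itself obviously can't exist in real code, but the time accounting can be checked. Here's a minimal sketch (my own hypothetical work/speed bookkeeping, not the box's actual interface) summing the per-level runtimes when the speed doubles at every recursive level:

```python
def total_time(levels, work=1.0, speed=1.0):
    """Sum the wall-clock time of `levels` recursive steps,
    assuming each step does `work` units and doubles the speed."""
    t = 0.0
    for _ in range(levels):
        t += work / speed  # time for this level at the current speed
        speed *= 2.0
    return t

# The series 1 + 1/2 + 1/4 + ... converges to 2, so the total
# stays bounded no matter how many levels we add.
# total_time(10) → 1.998046875
```

No matter how large `levels` gets, the total never exceeds 2×`work`, which is the whole trick: infinitely many steps, finite elapsed time.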
Your assumption that you could fully simulate the magic box inside itself is a pretty massive one, and I honestly wouldn't expect it to be true in this hypothetical universe. After all, after receiving the input parameters, the machine simulates a Universal Turing Machine of arbitrary finite size, which by definition cannot perform any non-Turing-computable functions, and certainly couldn't simulate a machine more complex than itself. In order for your magic "Zeno's paradox-destroyer" function to work given the parameters outlined in the story, it would need to be able to call the machine running it from the inside, and God (in His infinite wisdom 🙃) hasn't given us any "escape" API functions for us to do that.
(note that I really do appreciate your thoughts here, and I’m not trying to dunk on them, just trying to get a better understanding of unbounded finite computational systems and want to plug any potential holes in the story before expanding on this hypothetical universe in the future)
How exactly does the "speed" box take input? Because if its input is a plain decimal number, then we really only have access to exponential compute, since we'll need to type out a string of n digits to get 2^O(n) compute. If it takes more general programs (which output a number), then we can use the machine itself to search for short formulas which output large numbers, which might buy us a lot more compute. On the other hand, if the speed box takes general programs, then it needs to somehow recognize programs which don't halt, which we could also exploit for hypercomputation.
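To make the "search for short formulas" idea concrete, here's a toy sketch (my own illustration, nothing from the story): brute-force all expressions up to a given length over digits, +, and * (with ** arising from two adjacent *), and keep the one with the largest value. A real search would of course run on the box itself.

```python
from itertools import product

def largest_value(max_len, alphabet="123456789+*"):
    """Brute-force the short arithmetic expression with the largest
    value — a toy stand-in for using the machine to hunt for compact
    notations that denote huge numbers."""
    best_expr, best_val = None, -1
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            expr = "".join(chars)
            try:
                # eval is safe here: the alphabet is only digits, + and *
                val = eval(expr)
            except SyntaxError:
                continue  # skip malformed strings like "*9*"
            if isinstance(val, int) and val > best_val:
                best_expr, best_val = expr, val
    return best_expr, best_val

# With four characters, the winner is already "9**9" = 387420489:
# exponentiation beats any literal of the same length.
```

The point of the toy: once the input is a formula rather than a literal, value grows much faster than input length, so the encoding of the speed field really does decide how much compute you can reach.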
Maybe the input box uses its own notation, something weak enough that infinite loops are impossible but powerful enough to express Conway’s arrows? That seems like it would be enough to be interesting without accidentally adding a halting oracle.
I’ll go with Taran’s idea there, I think. Something like Douglas Hofstadter’s BlooP language, perhaps, which only allows primitive recursive functions. Would BlooP allow for chained arrow notation, or would it be too restrictive for that? More generally, what is the fastest growing primitive recursive function possible, and what limits would that give us in terms of the scope of problems that can be solved by our magic box?
Would BlooP allow for chained arrow notation, or would it be too restrictive for that?
Sadly, you need more than that: chained arrows can express the Ackermann function, which isn’t primitive recursive. But I guess you don’t really need them. Even if you just have Knuth’s up-arrows, and no way to abstract over the number of up-arrows, you can just generate a file containing two threes with a few million up-arrows sandwiched between them, and have enough compute for most purposes (and if your program still times out, append the file to itself and retry).
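For concreteness, a sketch of Knuth's up-arrows (a hypothetical helper `up(a, n, b)` for a ↑ⁿ b; note that taking the arrow count n as an argument is exactly the abstraction the input box would not allow, and it's what pushes the growth into Ackermann territory):

```python
def up(a, n, b):
    """Knuth's a ^^...^ b with n up-arrows."""
    if n == 1:
        return a ** b  # one arrow is plain exponentiation
    if b == 0:
        return 1       # base case: a with n arrows applied to 0 is 1
    # each arrow level iterates the level below it
    return up(a, n - 1, up(a, n, b - 1))

# up(3, 2, 3) is 3↑↑3 = 3^(3^3) = 7625597484987 — two threes and
# two arrows already give nearly eight trillion.
```

With a few million arrows between two threes, as suggested above, the value is far beyond anything a literal of the same file size could denote.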
That's a really good point, and I'm mentally kicking myself right now for not having thought of it. In answer to another comment below yours, I suggested only allowing primitive recursive functions to be entered as input in the second box, which I think would solve the problem (though possibly creating another computational limit at the growth rate of the fastest-growing primitive recursive function [which I haven't studied at all tbh, so if you happen to have any further reading on that I'd be quite interested]). Going a bit further with this: while I suggested God limiting us to one bounded language like BlooP for the second field, if He instead allowed any language to be used, but only primitive recursive functions to be run, would we be able to exploit that feature for hypercomputation?
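A sketch of what "primitive recursive only" feels like in practice (Python standing in for BlooP's bounded loops, my own illustration): every loop bound is fixed before the loop starts, and there are no while-loops. Each fixed tower height built this way is primitive recursive; it's only the diagonal — letting the nesting depth itself grow with the input, as Ackermann's function does — that escapes the class.

```python
def pr_pow(a, b):
    """Exponentiation using only a bounded loop (BlooP-style:
    the bound b is known before the loop begins, no while)."""
    result = 1
    for _ in range(b):
        result *= a
    return result

def pr_tower(a, n):
    """A power tower of height n (a↑↑n), again with only a bounded
    loop. Still primitive recursive, and already enough to dwarf
    any practical compute budget."""
    result = 1
    for _ in range(n):
        result = pr_pow(a, result)
    return result

# pr_tower(3, 3) = 3^(3^3) = 7625597484987
```

So even a BlooP-restricted input field leaves the box absurdly powerful in practice, while keeping the halting question trivially decidable.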
Also, while we’re discussing it, what would be in the space of problems that could be uniquely solved by 2^O(n) compute, but not by “normal” O(n) compute?