I wrote a hypercomputer in 60-ish lines of Python.
It’s (technically) more powerful than every supercomputer in the world.
Edit: actually, I spoke too soon. What I have written is code outlining a general scheme that can be modified to construct schemes in which hypercomputers could possibly be constructed (including itself). I haven’t proven that my scheme allows for hypercomputation, but a scheme similar to it (probably) could, including itself.
Edit: I was downvoted for this, which I suppose was justified.
What my code does is simulate a modified version of CGoL (John Conway’s Game of Life). It’s modified so that information can (technically) flow back through time. It’s very similar to what EY outlined in the second section of Causal Universes, except my algorithm is much simpler, and faster (it’d be even faster if I hadn’t done a half-assed job of coding it and had chosen a better language to write it in).
My scheme is more general than the code. I’ve tried explaining it on /r/cellular_automata here and here, with a passable degree of success.
The scheme itself is capable of hypercomputation with the right underlying rules. I’ll write a quick demonstration, assuming you’ve read Causal Universes and my explanations:
In order to be capable of hypercomputation, it must be capable of regular computation. CA have already been proven to be Turing machines in disguise, so I’ll take this for granted.
By the above, you should be able to construct a simulation of any Turing machine in the CA. Again, this is a fact, so I’ll take it for granted.
I’ve already said that the algorithm involves backwards information flow (time travel by another name).
By the above, we can construct a state in the CA which simulates a given Turing machine, then pipes its output back in time to a finite time after the simulation started.
If we modify the simulation to instead just pipe the fact that the machine produced output, and nothing else (one bit), we can know beforehand whether a Turing machine produces output.
I’d think anyone reading this is familiar with it, but this is called the Halting Problem, and I think (I could be wrong, but I am highly confident I am not) my scheme solves it.
The only real problem is that if the T-machine doesn’t halt, neither will the one we constructed; but if it does halt, it will produce output after an arbitrary but finite amount of time.
This does mean my scheme is more powerful than a Turing machine. For instance, it can compute the busy beaver function for a proportional number of values.
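The one-bit version of the scheme above can be sketched in a few lines. This is my own toy formalization, not the linked code: `consistent_bits` guesses the bit “X produces output within H steps” that supposedly arrives from the future, simulates, and keeps only guesses the simulation confirms. `simulate` is a hypothetical stand-in for running the CA-embedded Turing machine; here “X” is just the step at which it outputs (or `None` for never), so the example stays runnable. Note the horizon `H` reflects the caveat above: a non-halting machine is only ever ruled out up to the horizon actually simulated.

```python
def consistent_bits(simulate, X, H):
    """Timelines are one-bit messages from the future; keep the consistent ones.

    A guessed bit is a consistent timeline iff simulating X for H steps
    reproduces exactly that bit.
    """
    produced = simulate(X, H)  # did X produce output within H steps?
    return [bit for bit in (False, True) if bit == produced]

# Toy stand-in: "X" is the number of steps before it outputs, or None for never.
toy_simulate = lambda X, H: X is not None and X <= H

print(consistent_bits(toy_simulate, 3, 10))     # [True]
print(consistent_bits(toy_simulate, None, 10))  # [False]
```

Exactly one timeline survives in each case, which is what lets the one surviving bit be read off as the answer.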
You can look at the code here, but it’s messy and hacked together. I only wrote it as a proof of concept in the first /r/cellular_automata thread I linked.
Your scheme may well be more powerful than a Turing machine (i.e., if there were something in the world that behaves according to your model then it could do computations impossible for a mere Turing machine), but much of what you write seems to indicate that you think you have implemented your scheme. In Python. On an actual computer in our universe.
Obviously that is impossible (unless Python running on an actual computer in our universe can do things beyond the capabilities of Turing machines, which it can’t).
Could you clarify explicitly whether you think what you have implemented is “more powerful than every supercomputer in the world” in any useful sense? What do you expect to happen if you feed your code a problem that has no Turing-computable solution? (What I expect to happen: either it turns out that you have a bug and your code emits a wrong answer, or your code runs for ever without producing the required output.)
I’m sorry that I overestimated my achievements. Thank you for being civil.
What do you expect to happen if you feed your code a problem that has no Turing-computable solution?
I’m actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output starting out fuzzy and getting progressively less fuzzy, but never becoming certain.
Although I wonder about something like super-tasks somehow being described in my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things if you can achieve a transfinite number of branches.
If you’re still interested in this (I doubt you are; there are more important things you can do with your time, but still), you could glance at this reply I gave to taryneast describing how it checks if a Turing machine halts. (I do have an ulterior motive in pointing you there, seeing as I want to find that one flaw I’m certain is lurking in my model somewhere.)
Could some other people who have read Causal Universes comment on whether EY implies in it that hypercomputation is possible? What is hypercomputation?
EY doesn’t imply that hypercomputation is possible in our universe. He does imply that there are logically possible universes in which hypercomputation is possible. Hypercomputation is any computational process that is able to evaluate at least one non-Turing-computable function.
(That last definition may be unhelpful in practice. If our universe is finite then it can’t contain anything that’s even Turing-powerful, never mind capable of hypercomputation. If our own lifespan and perceptual powers are finite then there’s no way we could ever know anything was doing hypercomputation.)
Hypercomputation is any computational process that is able to evaluate at least one non-Turing-computable function.
What about computing a Turing-computable function in O(m) time, where a Turing machine needs O(n) and m < n? Does quantum computation count as hypercomputation? The Wikipedia page seems to say no.
I’d agree with Wikipedia. After all, there are plenty of other physically-realizable models of computation that are equivalent to Turing machines in terms of what they can compute but substantially inequivalent in terms of how long it takes.
[EDITED to add:] … Though I think anything you can do in a Newtonian universe can be modelled on a Turing machine with at worst a polynomial-in-problem-size slowdown. So there might be something to be said for counting as “hypercomputation” anything that, e.g., can evaluate all computable functions in constant time. But that isn’t the usual use of the term.
I’m willing to suspend judgement pending actual results.
Demonstrate it does what you claim and I’ll be very interested.
Note you probably already know this, but in case you don’t: AFAIK the Halting problem has a mathematical proof… you will require the same to prove that your system solves it.
i.e., just showing that it halts on many programs won’t be enough (Turing machines do this too).
You’ll have to mathematically prove that it halts for all possible problems.
For some strange reason, your post wasn’t picked up by my RSS feed and the little mail icon wasn’t orange. Sorry to keep you waiting for a reply for so long.
The Halting proof is for Turing machines. My model isn’t a Turing machine; it’s supposed to be more powerful.
You’ll have to mathematically prove that it halts for all possible problems.
Not to sound condescending, but this is why I’m posting it on a random internet forum and not sending it to a math professor or something.
I don’t think this is revolutionary, and I think there is a very good possibility there is something wrong with my model.
I’ll tell you what convinced me that this is a hypercomputer, though. And I’ll go ahead and say I’m not overly familiar with the Halting Problem, inasmuch as I don’t understand its inner workings as well as I can parrot facts about it. I’ll let more experienced people tell me if this breaks some sort of condition.
What my model essentially does is graft a time-travel formalism onto something Turing-complete. Since the Turing-complete model of your choice is a special case of the model we just constructed, the result is already Turing-complete. And since the formalism itself already specifies that information can travel backwards through time, what has to be proven is that an algorithm can be constructed that solves the halting problem.
With all of that, we can construct an algorithm based on the following assumptions about time travel:
Inconsistent timelines “don’t exist”*
A timeline is inconsistent if it sends back different information than it receives
If more than one timeline is consistent, then all are equally realized.
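The three assumptions above amount to a fixed-point condition, which can be sketched directly. This is my formalization, not the original code: I identify a “timeline” with the message it receives from the future, and assume a function `run(msg)` that simulates the machine after it receives `msg` and returns the message that simulation sends back.

```python
def consistent_timelines(candidate_messages, run):
    """Apply the three assumptions: a timeline "exists" iff the message it
    sends back to the past equals the message it received, and all such
    fixed points are equally realized."""
    return [msg for msg in candidate_messages if run(msg) == msg]

# Toy example: the machine sends back "the received number plus one, capped
# at 3". Only msg == 3 is a fixed point, so it is the sole surviving timeline.
survivors = consistent_timelines(range(10), lambda msg: min(msg + 1, 3))
print(survivors)  # [3]
```

The interesting cases are exactly when this filter leaves zero, one, or many survivors, which is what the rest of the argument turns on.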
I have no idea if you read through the ramblings I linked, but the gist was that to simulate the model, at any given timestep the model receives all possible input from the future, organized into different branches. ‘Possible’ is an important qualifier: whether you can tell, by looking only at the current state, that a given bit of memory depends on the future is the difference between the model being exponential in the size of the memory and exponential in an arbitrary quantity constrained to be smaller than the size of the memory.
Next, I’ll point out that because the model allows computation to be carried out between receiving and sending messages, you can use the structure of the model to do computation. An illustration:
Suppose X is a Turing machine, and you are interested in whether or not it halts.
1. Receive extra input from the future (in the form of “X will halt after n timesteps”).
If null, go to 3 with n = 0.
2. Is it properly formatted?
If not, halt without outputting to the past. (Halt.)
3. Simulate X for exactly n timesteps.
If it halts before then, output “X will halt after m timesteps”, where m is the number of cycles before it halted. Halt.
If it doesn’t halt after n timesteps, output “X will halt after n+1 timesteps”. Halt.
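The numbered steps above can be sketched under the consistent-timeline semantics. This is a hedged toy version with names of my own choosing: instead of a real Turing machine, `X` is a callable returning the step at which it halts, or `None` for “never”, so the example stays runnable; `timeline_output(X, n)` plays the role of steps 1–3 for the timeline that received the message n.

```python
def timeline_output(X, n):
    """The message timeline n sends to the past, per steps 1-3 above."""
    z = X()  # step at which X halts, or None if it never halts
    if z is not None and z <= n:
        return z      # X halted within the simulated n steps: report z
    return n + 1      # X didn't halt within n steps: claim n+1

def consistent_timelines(X, max_n):
    """Timelines where the message received (n) matches the message sent."""
    return [n for n in range(max_n) if timeline_output(X, n) == n]

halts_at_5 = lambda: 5       # a machine that halts at t=5
never_halts = lambda: None   # a machine that never halts

print(consistent_timelines(halts_at_5, 100))   # [5]
print(consistent_timelines(never_halts, 100))  # []
```

For the halting machine exactly one timeline survives (the true halting time); for the non-halting machine every timeline is inconsistent, which is the case the thread flags as the fuzziest part of the argument.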
I’ll note this algorithm only appeared to me after writing my posts.
Here’s how it works.
We can number each timeline branch based on what it outputs to the past. If it outputs “X will halt after y timesteps”, then it is machine y.
If X doesn’t halt, machine y will simulate X for y timesteps, and output that it will halt after y+1 timesteps.
The above point should be emphasized. Recall from above that a timeline is inconsistent, and therefore “non-existent”, if its input doesn’t match its output. Thus, for a non-halting X, every machine y will be inconsistent, and the hypercomputer will halt immediately (things are a bit fuzzy here; I am 80% confident that if there is a problem with my model, it lies in this part).
If X halts at t=z, then y=z is the only consistent timeline. Timelines y>z output z, and timelines y<z output y+1, making y=z a kind of attractor.
My problems with this scheme are as follows:
I haven’t been able to formulate the algorithm without either a) having every timeline inconsistent when the machine halts, or b) the actual output being uncertain (if it says it halts at z, you know for a fact it does, but if it says it doesn’t halt, then you can’t be sure).
“Non-existent” timelines have causal weight.