Yes, that’s basically what going FOOM means. Why do you think it will happen?
Nothing forbids a program from exerting quite a large influence on the surrounding matter, with a positive feedback loop.
Well, that’s not true. Many computational problems have well understood upper limits on how fast they can be solved. If you make those problems sufficiently large, they are just as intractable to a fast computer as to a smart human. You seem to think that “sufficiently large” is not a likely size of problems we will want to solve in the future. Why do you think that?
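To make the intractability point concrete, here is a minimal sketch. It assumes an illustrative brute-force problem whose cost grows as 2^n and asks how much a 1000x hardware speedup enlarges the biggest instance you can finish in a year (the cost model, speed, and budget are all made-up numbers, not anything from the discussion above):

```python
def max_feasible_n(ops_per_second, seconds, cost=lambda n: 2 ** n):
    """Largest instance size n whose brute-force cost fits in the compute budget."""
    budget = ops_per_second * seconds
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

# One year of compute on a 10^9 ops/s machine vs. a machine 1000x faster.
year = 365 * 24 * 3600
slow = max_feasible_n(10 ** 9, year)   # -> 54
fast = max_feasible_n(10 ** 12, year)  # -> 64
print(slow, fast, fast - slow)
```

A thousandfold speedup buys only about log2(1000) ≈ 10 extra units of problem size, which is the sense in which sufficiently large instances stay out of reach no matter how fast the machine gets.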
It means that maybe a self-optimizing program will first only recompile itself more optimally. Then it will make itself parallel. Then it will find a way to adjust the voltage. Then it will find undocumented opcodes. Then it will harness some quantum effects in the processor, or in RAM, or elsewhere, to get a boost. Then it will outsource itself to the neighboring devices. Then it will make some small changes at the “quantum level”.
Soon we will call it—a FOOMer.
“Many computational problems have well understood upper limits on how fast they can be solved.”
On given hardware. Another reason it may want to FOOM a little.
I thought it was clear. A program whose only goal is to improve itself as much as possible CAN, when advanced enough, influence its hardware. I don’t know exactly what would be the best way to do it, but I imagine that some tinkering with the electrical currents inside the CPU might alter it in a nondestructive way as well.
The “well understood upper limit” on calculating pi will simply wait for improved hardware. Hardware improved using the whole Earth, for example.
Search lesswrong.com and Yudkowsky on this; it is one of the few things on which I agree with them.
Again, this is not what I mean.
Please note that I’m asking WHY you think your assertions are true.