The main point is that humans do recursively self-improve, on some level, in some fashion. Why should we expect a formal computer program that recursively self-improves to reach greater heights?
There are many reasons, but here are just a few that should be sufficient. It's much, much easier for a computer program to change its own program than it is for a human being to change theirs: the program, having been artificially designed, would be far more modular and self-comprehensible than the human brain and genome (quite apart from how much easier it is to change bits in memory than synapses in a brain), whereas a human's program is embedded in a brain that takes decades to mature and is a horrible mess of poorly understood, interdependent spaghetti code. A computer program can safely and easily make perfect copies of itself for experimentation, and can try out different ideas on those copies. And a computer program can trivially scale up by adding more hardware (assuming it was designed to be parallelizable, which it would be).
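To make the copying-and-experimenting point concrete, here is a minimal Python sketch, with a toy one-parameter "program" and a made-up objective standing in for real self-modification (all names and numbers here are hypothetical illustrations, nothing more):

```python
# Sketch of "copy yourself and experiment": spawn copies that each try a
# perturbed variant of the current "program" (here just one parameter),
# score them in parallel, and keep whichever variant performs best.
from multiprocessing import Pool

def evaluate(step_size: float) -> float:
    """Stand-in fitness function; real self-improvement would be far harder."""
    return -(step_size - 0.3) ** 2  # toy objective: best at step_size == 0.3

def improve(current: float, candidates: int = 8) -> float:
    """Test perturbed copies of the current 'program' in parallel workers."""
    variants = [current + (i - candidates // 2) * 0.05 for i in range(candidates)]
    with Pool() as pool:                    # each worker is a cheap, perfect copy
        scores = pool.map(evaluate, variants)
    return max(zip(scores, variants))[1]    # keep the best-scoring variant

if __name__ == "__main__":
    program = 1.0
    for generation in range(5):
        program = improve(program)
    print(f"settled on step_size ~ {program:.2f}")
```

The point of the sketch is only that the copy-test-select loop itself is a few lines; nothing biological offers anything comparably cheap.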
First of all, it's pure conjecture that a programmed system of near-human intelligence would be any simpler than a human brain. A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
Second of all, there is no direct correlation between speed and intelligence. Just because a computer can scale up for more processing power doesn't mean it's any smarter, so it can't suddenly use this technique to "foom" via RSI.
Third, making copies of itself is a non-trivial activity which amounts to simulating itself, which in turn means an exponential reduction in the processing power available to it. I don't see the GAI being able to make copies of itself much more easily than, say, two humans …reproducing… and waiting nine months to get a baby.
It's conjecture, yes, but not pure conjecture. Natural selection doesn't optimize, it satisfices, and the slow process of accreting new features and repurposing existing systems for alternative uses ensures that there's lots of redundancy, with lots of room for simplification and improvement. When has the artificial solution ever been as complex as the naturally evolved alternative it replaced, and why should the human brain be any different?
Intelligence tests are timed for a reason: speed is one aspect of intelligence. If the program is smart enough (which it is, by hypothesis) that it will eventually come across the right theory, consider the right hypothesis, develop the appropriate mathematics, and so on (just as we might argue the smartest human beings eventually would), then more processing power makes that happen much faster, since the many dead ends can be reached sooner and the alternatives explored more quickly.
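A toy Python sketch of that speed point, with a deliberately dumb brute search standing in for "considering hypotheses" (the search space and the one "right theory" are invented for illustration): the same search finishes roughly N times sooner with N workers, without any single worker being individually smarter.

```python
# Same exhaustive search over candidate "hypotheses"; more workers means
# the dead ends are burned through proportionally faster.
from concurrent.futures import ProcessPoolExecutor

def test_hypothesis(h: int) -> bool:
    """Stand-in for checking one candidate; almost all are dead ends."""
    return h == 123_456  # only one 'right theory' in this toy space

def search(space: range, workers: int) -> int:
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for h, ok in zip(space, pool.map(test_hypothesis, space, chunksize=1000)):
            if ok:
                return h

if __name__ == "__main__":
    print(search(range(1_000_000), workers=8))
```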
Making a copy of itself requires a handful of machine instructions, and sending that copy to a new processing node with instructions on what hypotheses to investigate takes only a few more. I feel like I'm being trolled here, with the suggestion that copying a big number from one location in computer memory to another can't be done any more easily than creating a human baby (and don't forget educating it for 20 years).
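For a sense of just how few lines "make a copy and hand it a task" comes to in software, here is a sketch (the function name and the hypothesis ranges are hypothetical placeholders):

```python
# Spawning duplicate worker processes, each assigned its own slice of
# hypotheses to investigate: a few lines, not nine months plus schooling.
from multiprocessing import Process

def investigate(node_id: int, hypotheses: range) -> None:
    """Each copy works through its assigned slice of the search space."""
    print(f"node {node_id}: checking {hypotheses.start}..{hypotheses.stop}")

if __name__ == "__main__":
    copies = [Process(target=investigate, args=(i, range(i * 1000, (i + 1) * 1000)))
              for i in range(4)]
    for p in copies:
        p.start()
    for p in copies:
        p.join()
```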
A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
And yet its source code is much more comprehensible (and, crucially, much more maintainable) than the DNA of even a very simple single-celled organism.