OP said: “If there is no unified theory of intelligence, we are led towards the view that recursive self-improvement is not possible, since an increase in one type of intelligence does not necessarily lead to an improvement in a different type of intelligence.”
I think that some forms of self-improvement (SI) could be done without recursivity. I created a list of around 30 types of SI, ranging from accelerating hardware up to creating better plans. Most of them are not naturally recursive.
If SI produces a limited 2x improvement at each level, and does not use the recursivity option, that is still enough to create a 2^30 improvement of the system, or around a billion-fold improvement.
(Below is some back-of-envelope, Fermi-style estimation, so the numbers are rather arbitrary and are given just to illustrate the idea.)
It means that a near-human-level AI could reach the power of around 1 billion humans without using the recursivity option. The power of 1 billion humans is probably more than the total power of all human science, where around 50 million researchers work.
Such an AI would outperform human science roughly 20 times over and could be counted as a superintelligence. Surely such power is more than enough to kill everybody, or to solve most of humanity's important problems.
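To make the arithmetic concrete, here is a minimal Python sketch of the estimate above. All the numbers (30 improvement types, a 2x gain from each, 50 million researchers) are the illustrative assumptions from this comment, not measurements.

```python
# Fermi sketch of the estimate above; every figure is an illustrative
# assumption from the comment, not a measurement.
n_improvement_types = 30      # distinct, non-recursive types of SI
gain_per_type = 2             # limited 2x improvement from each type
researchers = 50_000_000      # rough size of the global research workforce

total_gain = gain_per_type ** n_improvement_types     # 2**30
print(f"total gain: {total_gain:,}")                  # 1,073,741,824 (~1 billion)
print(f"vs. human science: {total_gain / researchers:.0f}x")  # ~21x
```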
Such self-improvement is reachable without an understanding of the nature of intelligence, and it doesn't depend on the assumption that such understanding is needed for SI. So we can't use the argument about the messiness of intelligence as an argument for AI safety.
I think that “recursive self-improvement” means that one improvement leads the AGI to be better at improving itself, not that it must use the same trick every time.
If accelerating hardware allows for better improvements along other dimensions, then better hardware is still part of recursive improvement.
Sure, but it is not easy to prove in each case. For example, if an AI doubles its hardware speed and also buys twice as much hardware, its total productivity would grow 4 times. But we can't say that the first improvement was caused by the second.
However, if the AI got the idea that improving hardware is useful, that is a recursive act, as this idea helps further improvements. Moreover, it opens up a field of other ideas, like the improvement of improvement. That is why I say that true recursivity happens on the level of ideas, not on the hardware level.
Can’t recursivity be a cycle containing hardware as a node?
As the resident LessWrong “won't someone think of the hardware” person, this comment rubs me up the wrong way a fair bit.
First, there is no well-defined thing called “hardware speed”. It might refer to various things: clock speed, operations per second, memory bandwidth, or memory response times. Depending on what your task is, your productivity might be bottlenecked by one of these things and not the others. Some of them, like memory response times, are set by the speed of signals traversing the motherboard and are hard to improve while we still have the separation of memory and processing.
Getting twice the hardware might yield less than twice the improvement. If there is some serial process, then Amdahl's law comes into effect. And if the different nodes need to make sure they have a consistent view of something, you need to add latency so that a sufficient number of them can agree on a good state of the data via a consensus algorithm.
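For concreteness, here is a minimal Python sketch of Amdahl's law; the 10% serial fraction is an assumption picked for illustration, not a number from this thread.

```python
# Amdahl's law: only the parallel part of the work benefits from more nodes.
def amdahl_speedup(n_nodes: int, serial_fraction: float) -> float:
    """Overall speedup with n_nodes, given the fraction of work that is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

# Assume 10% of the work is serial (an illustrative assumption).
for n in (2, 4, 16, 1024):
    print(f"{n} nodes -> {amdahl_speedup(n, 0.10):.2f}x")
# 2 nodes -> 1.82x, not 2x; and no amount of extra hardware pushes past 10x.
```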
Your productivity might be bottlenecked by external factors, not processing power at all (e.g. not getting data fast enough). This is my main beef with the sped-up-people thought experiment: the world is moving glacially for them, and data is coming in at a trickle.
If you are searching a space and you add more compute, the new compute may end up searching less promising areas, so you might not get twice the productivity.
I really would not expect twice the compute to lead to twice the productivity except in the most embarrassingly parallel situations, like computing hashes.
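As a toy illustration of the search point (a made-up model, not a claim about any real workload): if each unit of compute draws one random candidate from the space, the expected best-of-k result grows only logarithmically, so doubling compute adds much less than 2x.

```python
# Toy model: each unit of compute samples one candidate from an
# exponential landscape; 'productivity' = expected best score found.
import random

def expected_best(k: int, trials: int = 5000) -> float:
    """Monte Carlo estimate of the expected best-of-k sample."""
    return sum(max(random.expovariate(1.0) for _ in range(k))
               for _ in range(trials)) / trials

for k in (1, 2, 4, 8):
    print(f"compute = {k}: best ~ {expected_best(k):.2f}")
# Grows like the harmonic number H_k (1, 1.5, 2.08, 2.72, ...): doubling
# compute adds far less than 2x. Computing hashes, by contrast, is
# embarrassingly parallel: twice the compute really does double throughput.
```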
I think your greater point is weakened, but not by much. We have lots of problems trying to distribute work and collaborate on problems together, so human intelligence is not purely additive either.
Thanks for elaborating. I agree that doubling hardware speed will not actually produce twice the intelligence. I used this oversimplified example of hardware acceleration as an example of non-recursive self-improvement, and the diminishing returns only underline its non-recursive nature.