The space of value systems is vast, but I don’t think the particular subspace of value systems that attempt to maximize some simple pattern (such as paperclips) carries enough probability mass to even warrant discussion. And even if it did, even simple maximizers will first ride Moore’s Law if they have a long planning horizon.
The space of expansionist replicator-type value systems (intelligences which value replicating entire entity patterns similar to themselves or some component self-patterns) is a large, high-likelihood cut of design space.
The goal of a replicator is to make more of itself. A rational replicator will pursue the replication path that has the highest expected exponential rate of replication for the cost, which we can analyze in economic terms.
If you actually analyze the cost of interstellar replication, it is many orders of magnitude more expensive and less efficient than replicating by doubling the efficiency of your matter encoding. You can double your population/intelligence/whatever by becoming smaller, quicker, and more efficient through riding Moore’s Law, and the growth rate of that strategy is orders of magnitude higher than the rate of return provided by interstellar travel.
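As a rough illustration of the gap, here is a toy comparison; the doubling time, expansion speed, and star density below are illustrative assumptions, not measured claims. Inward growth compounds exponentially, while expansion at a fixed fraction of lightspeed only grows available resources with the cube of elapsed time:

```python
import math

# Toy comparison: exponential "inward" growth vs. expansion-limited growth.
# All three parameters are illustrative assumptions.
DOUBLING_TIME_YEARS = 2.0   # assumed Moore's-Law-style doubling time
EXPANSION_SPEED_C = 0.1     # assumed colonization speed, fraction of lightspeed
STARS_PER_CUBIC_LY = 0.004  # rough density of stars near the Sun

def inward_growth(years):
    """Resource multiplier from efficiency doublings alone."""
    return 2.0 ** (years / DOUBLING_TIME_YEARS)

def expansion_growth(years):
    """Resource multiplier from colonizing a growing sphere of stars."""
    radius_ly = EXPANSION_SPEED_C * years
    volume = (4.0 / 3.0) * math.pi * radius_ly ** 3
    return 1.0 + STARS_PER_CUBIC_LY * volume  # the 1 is the home system

for t in (10, 50, 100):
    print(f"t={t:3d} yr  inward={inward_growth(t):.2e}  expansion={expansion_growth(t):.2e}")
```

With these numbers, a century of inward doubling yields a ~10^15-fold multiplier, while a century of expansion at 0.1c reaches fewer than twenty star systems.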
This blog post discusses some of the cost estimates of interstellar travel.
Interstellar travel only makes sense when it is the best investment option for maximizing the replication rate of return. Consider that long before interstellar replication is economical, interplanetary expansion to the Moon and Mars would be exploited first. And long, long before that actually becomes a wise investment, replicators will first expand to Antarctica. So why is Antarctica not colonized?
Expanding to utilize most of Earth’s mass is only rational for replicators when Moore’s Law type growth stalls completely. So hypothesizing that interstellar travel is viable is equivalent to making a long-term bet about what will happen at the end of Moore’s Law.
What if Moore’s Law type inward exponential expansion has no limit? There doesn’t appear to be any real hard limit on the energy cost of computation.
A molecular-level civilization could be mind-bogglingly vast and fast itself, without even considering reversible computing and then quantum computing. Much also depends on a final unified theory of physics. There is speculation that it may be possible to re-engineer space-time itself at the fundamental level: create new universes, wormholes, etc. All of this would open possibilities that make space travel look like the antiquated dreams of small-minded bacteria.
I think it’s extremely premature to rule out all of these options and assume that future super-intelligences will suddenly hit some ultimate barrier and be forced to expand outward at a terrible snail’s pace. It’s a failure of imagination.
It’s not a question of ruling out the scenario, just driving down its probability to low levels.
Current physics indicates that we can’t increase computation indefinitely in this way. It may be wrong, but that’s the place to put most of our probability mass. When we consider new physics, it might increase the returns to colonization (e.g. more computation using bigger black holes) or have little effect, with only a portion of our probability mass going to the “vast inner expansion” scenarios.
Even in those scenarios, there’s still the intelligence explosion dynamic to consider. At each level of computational efficiency it may be relatively easy or hard to push onwards to the next level: there might be many orders of magnitude of easy gains followed by some orders of difficult ones, and so forth. As long as there are bottlenecks somewhere along the technology trajectory, civilizations should spend most of their time there, and would benefit from additional resources to advance through the bottlenecks.
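A minimal sketch of why elapsed time concentrates at bottlenecks, with purely made-up difficulty numbers:

```python
# Toy trajectory: years needed per order of magnitude (OOM) of efficiency
# gain. "Easy" OOMs take 1 year, "hard" (bottleneck) OOMs take 1000 years --
# purely illustrative numbers.
years_per_oom = [1] * 10 + [1000] * 3 + [1] * 10

total_years = sum(years_per_oom)
bottleneck_years = sum(t for t in years_per_oom if t > 1)
print(f"total: {total_years} years, "
      f"spent at bottlenecks: {bottleneck_years / total_years:.1%}")
# -> total: 3020 years, spent at bottlenecks: 99.3%
```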
Combining these factors, you’re left with a possibility that seems to be non-vanishing but also small.
Current physics indicates that we can’t increase computation indefinitely in this way.
This is not clear at all.
Current physics posits no ultimate minimum energy requirement for computation: Landauer’s bound of kT·ln 2 per operation applies only to irreversible bit erasure. With reversible computing, a couple of watts could in principle perform any amount of computation. The upper theoretical limit is infinity; the limits are purely practical, not theoretical.
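For scale, here is a back-of-the-envelope check of what Landauer’s erasure bound implies for irreversible computing at room temperature (standard constants; the two-watt figure just echoes the claim above):

```python
import math

# Landauer's principle: irreversibly erasing one bit costs at least k*T*ln(2).
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K
WATTS = 2.0          # "a couple of watts"

joules_per_erasure = K_B * T * math.log(2)
erasures_per_second = WATTS / joules_per_erasure
print(f"{joules_per_erasure:.2e} J per erased bit")    # ~2.87e-21 J
print(f"{erasures_per_second:.2e} erasures/s at 2 W")  # ~7e+20 per second
```

Reversible logic sidesteps the erasure cost entirely, which is the sense in which the theoretical limit is unbounded.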
There is also quantum computation to consider:
A quantum computer with a given number of qubits is exponentially more complex than a classical computer with the same number of bits, because describing the state of n qubits requires 2^n complex coefficients. ... For example, a 300-qubit quantum computer has a state described by 2^300 (approximately 10^90) complex numbers, more than the number of atoms in the observable universe.
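A quick sanity check of the quoted figure, assuming the conventional 16 bytes (two 64-bit floats) to store one complex amplitude classically:

```python
# Sanity check on the quoted numbers (Python ints are arbitrary precision).
n_qubits = 300
amplitudes = 2 ** n_qubits
print(f"2^{n_qubits} has {len(str(amplitudes))} digits")  # 91 digits, ~1e90

# Classical storage to merely write that state down, at 16 bytes/amplitude:
bytes_needed = amplitudes * 16
print(f"~1e{len(str(bytes_needed)) - 1} bytes needed")    # ~1e91 bytes
```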
Why do you think that mass/energy is ultimately important? And finally, there are the more radical possibilities of space-time engineering.
When we consider new physics, it might increase the returns to colonization (e.g. more computation using bigger black holes) or have little effect, with only a portion of our probability mass going to the “vast inner expansion” scenarios.
I don’t follow your logic.
When we consider new physics, it could do any number of things. The most likely is increased utilization of existing matter/energy. It could also allow the creation of new matter/energy. Either of these would further increase the rate of return of acceleration over expansion. And acceleration already starts with a massive lead. The only new physics that would appear at first to favor colonization is circumvention of the speed of light, but depending on the details even that could favor acceleration over expansion.
As long as there are bottlenecks somewhere along the technology trajectory, civilizations should spend most of their time there, and would benefit from additional resources to advance through the bottlenecks.
I don’t see any benefit. A colony 10 light-years away would be more or less inaccessible to accelerated hyper-intelligences in terms of both bandwidth and latency.
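To make the latency point concrete, a one-line calculation; the million-fold speedup is an assumed figure for an “accelerated” mind:

```python
# Subjective round-trip latency to the colony for an accelerated mind.
distance_ly = 10     # from the example above
speedup = 1e6        # assumed subjective speedup over biological time

round_trip_years = 2 * distance_ly             # lightspeed signal, both ways
subjective_years = round_trip_years * speedup
print(f"{subjective_years:.0e} subjective years per exchange")  # 2e+07
```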
The possible benefit seems to be satisficing the idea that you have replicated, and/or possibly travelling to new regions where you have better growth opportunities.
Replicators might be a tiny part of AI-space, while still being quite a large part of the space of AIs likely to be invented by biologically evolved organisms.
What if Moore’s Law type inward exponential expansion has no limit? There doesn’t appear to be any real hard limit on the energy cost of computation.
The entire scenario of this post rests on this “what if,” and it’s not a very probable one. There appear to be hard theoretical limits to the speed of computation and the amount of computation that can be performed with a given amount of energy, and there may easily be practical limitations which set the bounds considerably lower. Assuming that there are limits is the default position, and in an intelligence explosion it’s quite likely that the AI will reach those limits quite quickly, unless the resources available on Earth alone are insufficient for it.
That wiki entry is wrong and/or out of date. It only considers strictly classical irreversible computation; it doesn’t mention quantum or reversible computation.
But as to the larger question: yes, I think there are probably eventual limits, but even this cannot yet be said for certain until we have a complete unified theory of physics (quantum gravity and whatnot).
From what we do understand of current physics, the limits of computation take us down to singularities: regions of space-time similar to the Big Bang (black holes, wormholes, and the like), which are not fully understood in current physics.
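For a sense of why the trail ends there: the Bekenstein bound caps the information content of any region by its mass and radius, and it is saturated only by black holes. A sketch with illustrative inputs:

```python
import math

# Bekenstein bound: I <= 2*pi*R*M*c / (hbar * ln 2) bits for a region of
# radius R (m) and mass-energy M (kg). Saturated only by black holes.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(mass_kg, radius_m):
    return 2 * math.pi * radius_m * mass_kg * C / (HBAR * math.log(2))

# Illustrative: a 1 kg, 10 cm device at the absolute information limit.
print(f"{bekenstein_bits(1.0, 0.1):.2e} bits")  # ~2.6e+42 bits
```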
Also, the larger trend towards greater complexity is not really dependent on computational growth per se. At a higher level of abstraction, the computational resources of the Earth haven’t changed much since its formation. All of the complexity increase since then has come from various forms of reorganization of matter/energy patterns. Increasing computational density is just one form of complexity-increasing transformation. Complexity can continue to increase at many other levels of organization (software, mental, knowledge, organizational, meta, etc.).
So the more important general question is this: is there an absolute final limit to the future complexity of the Earth system? And if we reach that, what happens next?
Can you explain what this complexity is and why you want so much of it?
See my other recent reply on our other thread.
Are you assuming the memory grows in proportion to your input bandwidth?
I’m not sure what you mean exactly. For classical computing, memory will grow exponentially down to the molecular scale. Past that there are qubits and quantum compression. I’m not quite sure how that ends or what could be past it.
What I meant was that reversible computation doesn’t come for free.
You have to be able to reverse the whole of the computation. If you have inputs coming from the outside, you have to have thermodynamically random bits for each input bit (you can then reverse the computation by exposing them to random fluctuations).
If you don’t have the pool of randomised bits, you have to overwrite known bits, which is irreversible.
Depending upon how many randomised bits you start out with, you will run out of them sooner or later, and then you will have to increase your memory in real time (spending less energy to do so than using irreversible computation).
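A toy bookkeeping model of that constraint, with illustrative numbers:

```python
# Toy bookkeeping for the randomized-bit pool; numbers are illustrative.
pool = 1_000_000       # assumed initial pool of thermodynamically random bits
grown_memory = 0       # bits of memory added once the pool runs dry

for _ in range(2_500_000):     # one external input bit per iteration
    if pool > 0:
        pool -= 1              # pair input with a random bit: still reversible
    else:
        grown_memory += 1      # grow memory rather than overwrite
                               # (overwriting would cost >= k*T*ln2 per bit)

print(f"pool left: {pool}, memory grown: {grown_memory} bits")
# -> pool left: 0, memory grown: 1500000 bits
```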