Wait, are you saying that there was an infinite rate of technological improvement at time zero?
The change I am talking about is at the highest level—simply change in pattern complexity. The initial symmetry breaking and appearance of the fundamental forces is a fundamental change and upwards increase in complexity, as are all the other historical events in the cosmic calendar. The appearance of electrons is just as real a change, and of the same category, as the appearance of life, brains, or typewriters.
Patterns may require minds to recognize them, but that doesn’t make them any less real. Minds recognize them because they are complex statistical correlations in space-time structure. Ultimately they are the only thing which is real.
If you look at the very first changes, they are happening on the Planck scale, 10^-43 seconds after 0, and the initial region around 0 is an actual Singularity. After that the time between events increases exponentially, corresponding to a sharp slowdown in the rate of change as the universe expands.
Eventually you get to this midpoint, and then in some local pockets the trend reverses and changes begin accelerating again.
The shape of the rate of pattern-change or historical events is thus a U: it starts out with an infinity at 0, a vertical asymptote, bottoms out in the middle, and is now climbing back up towards another vertical asymptote where changes again happen at the Planck scale—and beyond that we get another singularity.
It’s not an exponential or a sigmoid—those aren’t nearly steep enough.
The rate of events near the big bang goes as 1/t. The rate of local events on Earth follows that pattern in reverse, something like 1/(B-t), where B is some constant.
and the overall pattern seems to be something like:
1/(A+t) + 1/(B-t), where A is just 0, marking the initial Big Bang Singularity, and B is a local future-time singularity.
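For concreteness, here is a minimal sketch of that proposed rate curve; the values of A and B are arbitrary picks for illustration, not fits to any data:

    import numpy as np
    import matplotlib.pyplot as plt

    # Proposed event-rate curve: rate(t) = 1/(A + t) + 1/(B - t)
    A, B = 1e-3, 13.8          # illustrative values only; B loosely evokes "billions of years"
    t = np.linspace(0.01, B - 0.01, 1000)
    rate = 1.0 / (A + t) + 1.0 / (B - t)

    plt.plot(t, rate)
    plt.yscale("log")          # the U shape: vertical asymptotes at t = 0 and t = B
    plt.xlabel("t (arbitrary units)")
    plt.ylabel("rate of events")
    plt.show()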
You seem to really like a certain concept, without knowing quite what that concept is. I would call this an affective death spiral. I will call this concept awesomeness. You think of awesomeness as a number, a function of time, that roughly corresponds to the rate of occurrence of “significant events”.
The main problem with this is that awesomeness isn’t fundamental. It must emerge somehow out of the laws of physics. This means that it can break down in certain circumstances. No matter how awesome I think Newtonian mechanics is, it’s going to stop working at high speeds rather than going to infinity. You can only really be confident in a law holding in a certain region if you’ve observed it working in that region or you know how it emerges from deeper laws, even approximately. However, awesomeness emerges in a very messy way. Surely it doesn’t always follow the equations you propose; if humans extinguished themselves with nuclear weapons or nanotechnology tomorrow, awesomeness would go down to almost zero. An overall pattern like this can easily break down.
If you look at the very first changes, they are happening on the Planck scale, 10^-43 seconds after 0, and the initial region around 0 is an actual Singularity.
This is very death-spirally. A few related variables go to infinity, and only in models that admit to having no idea what’s going on there. There aren’t any infinities in the Hartle-Hawking wavefunction, AFAIK. You just jumped on the word singularity.
The rate of events near the big bang goes as 1/t. The rate of local events on Earth follows that pattern in reverse, something like 1/(B-t), where B is some constant.
By your own logic, awesomeness will therefore become negative after the singularity.
Patterns may require minds to recognize them, but that doesn’t make them any less real. Minds recognize them because they are complex statistical correlations in space-time structure. Ultimately they are the only thing which is real.
Awesomeness is a highly complex combination of a ridiculous number of variables. It is an abstraction.
I didn’t mean to imply that a Singularity implies an actual infinity, but rather a region for which we do not yet have complete models. My central point is that a wealth of data simply show that we appear to be heading towards something like a localized singularity—a maximally small, fast compression of local complexity. The words “appear” and “heading towards” are key.
Surely it doesn’t always follow the equations you propose; if humans extinguished themselves with nuclear weapons or nanotechnology tomorrow, awesomeness would go down to almost zero.
Nothing about that trend is inevitable, and as I mentioned several times the acceleration trend is localized rather than global; in most regions the trend doesn’t exist or peters out. Your criticism that it “doesn’t always follow the equations you propose” (where presumably by “doesn’t” you mean across all of space) is not a criticism of any point I actually made—I completely agree. I should have made it more clear, but that extremely simple type of equation would only be even roughly valid for small localized spatial regions. Generalizing it across the whole universe would require adding some spatial variation so that most regions feature no growth trend. And for all we know the trend on earth will peter out at some point in the future, long before hitting some final maximal singularity in complexity.
By your own logic, awesomeness will therefore become negative after the singularity.
Rather, the model breaks down at the singularity, and something else happens.
Awesomeness is a highly complex combination of a ridiculous number of variables. It is an abstraction.
Of course. But that is how we model and make predictions. The idea that there is no overall change in complexity over time is just another model, and it clearly fails all postdictions and makes nonsensical short-term predictions. The geometric model makes accurate postdictions and powerful predictions that fit those made from smaller-scale, more specific models (such as the predictions we can make from the development of AGI).
The idea that there is no overall change in complexity over time is just another model, and it clearly fails all postdictions and makes nonsensical short-term predictions.
I never said that there is no change in complexity over time; I just said that some trends in technological growth, such as Moore’s law, will stop too soon for your predictions to work.
You are saying that the singularity is a breakdown of our models rather than a literally infinite rate of growth, but earlier you said
Why should exponential acceleration ever peter out? It’s the overall mega-pattern over all of history to date.
and
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
Those were the things that seemed death-spirally to me, but they also seem to contradict what you are saying now. What am I misunderstanding?
You are saying that the singularity is a breakdown of our models rather than a literally infinite rate of growth, but earlier you said
The general change in complexity over time follows a surprisingly predictable pattern or trend. The resulting model predicts that local complexity will continue to accelerate in some narrow branches or sub-pockets of the universe towards a vertical asymptote, where it approaches infinity—a Singularity. We can understand this computationally as the end result of a long chain of recursive self-optimization driving computational systems down to smaller and faster scales until you eventually hit the Planck scale barrier. The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity. Computation/intelligence/complexity approaches infinity within this localized pocket, and at that moment in that region the model breaks down and “something strange happens”. Perhaps this involves the creation of new universes. If that is possible, that would allow complexity to continue to increase without bound in the newly generated bubble universes. So the term Singularity in this model has a very specific physical meaning—as in an actual space-time Singularity resembling a black hole or the Big Bang. That is why I call it “physical singularity”—I don’t mean some vague analogy like “greater than human intelligence”. The physics of singularities is not yet fully determined, so exactly what future hyper-intelligences could do at that level is open/unknown.
The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity.
Because they are both very dense? That’s hardly a resemblance. You keep making analogies like this, but I do not see what purpose they serve.
Computation/intelligence/complexity approaches infinity within this localized pocket, and at that moment in that region the model breaks down and “something strange happens”. Perhaps this involves the creation of new universes. If that is possible, that would allow complexity to continue to increase without bound in the newly generated bubble universes.
If the model breaks down, then it provides almost no evidence as to whether new universes can be created. This behaviour seems to fit the model better, but, since we already know that the model breaks down, we cannot use it to justify any such predictions.
So the term Singularity in this model has a very specific physical meaning—as in an actual space-time Singularity resembling a black hole or the Big Bang.
We don’t even know if there are singularities at the centre of black holes or at the big bang. Even if there were, there would be no reason to expect a similar singularity would be a necessary part of advanced technology. I do not see how you deduced this and it seems to only be a part of your argument because this phenomenon is described by the same word as a technological singularity.
The ultimate physical computer necessarily resembles a small piece of the big bang—a physical Singularity/black hole like entity.
Because they are both very dense? That’s hardly a resemblance.
I’m not sure what you mean by “that’s hardly a resemblance”. If the ultimate physical computer is dense enough to be a gravitational singularity, that is a black hole singularity by definition, not just resemblance. Look up Seth Lloyd’s paper “Ultimate physical limits to computation” for the physics reference on why ultimate computers necessarily involve physical black holes/singularities.
If the model breaks down, then it provides almost no evidence as to whether new universes can be created.
No; for more indirect speculative evidence we will have to wait for physics to advance, which may take a while (at least until AGI comes up to speed). However, this particular type of speculation suggested by the model is linked to existing ideas in physics—see chaotic inflation/bubble universes, the selfish biocosm/fecund universes theory, and John Smart’s developmental singularity idea for an overview.
We don’t even know if there are singularities at the centre of black holes or at the big bang.
Singularity here just means model-breakdown and ‘things going to infinity’. If new models remove the infinity then perhaps the ‘Singularity’ goes away, but you still have something approaching infinity. Regardless, in the meantime the word “Singularity” is employed.
Even if there were, there would be no reason to expect a similar singularity would be a necessary part of advanced technology.
There are specific, detailed physical reasons why singularities are natural endpoints of ultimate computational technology in theory (namely, they are maximal-entropy states, and computation is ultimately entropy-limited—see the earlier-mentioned work). Of course, that doesn’t mean that the ultimate practical computational systems will be black holes, but still.
I do not see how you deduced this and it seems to only be a part of your argument because this phenomenon is described by the same word as a technological singularity.
Whoever originally coined the term (Vernor Vinge?) picked Singularity specifically because of the association with model-breakdown in math/physics, but was probably not aware of the full connection to ultimate computational physics, as those results weren’t developed or understood until considerably later.
I am familiar with the work of Seth Lloyd (and that of Wei Dai) on the usefulness of black holes in computing. The singularity in black holes is a different issue than this usefulness.
I read something here recently with a good analogy for this. If someone thinks a whale is a fish, then fishiness is a quality that they would ascribe to whales, but it is not part of their definition of a whale, so they would stop saying that whales were fish if presented with conflicting evidence. Similarly, we have two issues here, technological singularities and mathematical singularities. It turns out that the latter might be useful for the purposes of the former, but it is not part of the definition of the former. I do not know what purpose you are bringing this up for. I feel like we are discussing the behaviour of whales and you keep mentioning that they are mammals. It is true according to our latest science, but it seems irrelevant. In particular, you did not link the claims you made earlier about technology accelerating forever to this discussion of black holes.
You originally said
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
You have since said that the singularity is not literally an infinity, but a breakdown of our models. When I pressed you on this contradiction, you did not really respond, but brought in other issues about black holes and bubble universes, including many extremely speculative proposals. What is your position on this?
I am familiar with the work of Seth Lloyd (and that of Wei Dai) on the usefulness of black holes in computing.
The former work is the particular use connected to our discussion (black hole computers). The second work (Wei Dai’s) is about black holes’ potential use as radiators/entropy dumps.
The singularity in black holes is a different issue than this usefulness.
No, it is the same. The speed and efficiency limitations of computation stem from the speed of light communication barrier, and thus they scale with density (inversely with size). Moore’s law is an exponential increase in information density. If you continue to increase density (packing more information into less space) eventually it leads you to the Bekenstein Bound and a black hole, a gravitational singularity.
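As a rough back-of-the-envelope sketch of that chain of reasoning (the choice of 1 kg confined within a 1 m radius is arbitrary, purely for scale):

    import math

    hbar = 1.054571817e-34   # J*s
    c = 2.99792458e8         # m/s
    G = 6.67430e-11          # m^3 kg^-1 s^-2

    M, R = 1.0, 1.0                                    # 1 kg within a 1 m radius
    E = M * c**2
    bekenstein_bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
    print(f"Bekenstein bound: {bekenstein_bits:.2e} bits")   # ~2.6e43 bits

    r_s = 2 * G * M / c**2                             # Schwarzschild radius of that mass
    print(f"Schwarzschild radius: {r_s:.2e} m")              # ~1.5e-27 m

A black hole saturates the bound, which is the sense in which maxing out information density drives you toward a gravitational singularity.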
If you plot it in terms of economic growth, computational growth or just complexity growth, the overall trend of the cosmic calendar is geometric—it ends with an infinity/singularity. I take this as general evidence against acceleration ever ending.
You have since said that the singularity is not literally an infinity, but a breakdown of our models. When I pressed you on this contradiction, you did not really respond, but brought in other issues about black holes and bubble universes, including many extremely speculative proposals. What is your position on this?
I believe I outlined it in previous replies—if Moore’s Law type exponential increase in information processing density continues past the barrier of molecular computing, this eventually leads to (requires) space-time engineering at the level of artificial gravitational singularities (black holes). All of future physics is speculative, but many branches of current speculation in physics for ultimate technologies involve manipulating gravitational singularities. Some possibilities include the creation of new bubble universes which would allow the overall pattern to replicate and continue inside the new universes—a form of multiversal replication—the developmental singularity idea.
Yes, this is all speculation, as is any theory of physical eschatology (such as the theory that we will eventually colonize the galaxy). The original start of all of this was the observation that colonizing the galaxy would amount to an extremely slow rate of growth compared to the historical trend. Growth at the historical pace will require (or predicts) something more radical such as space-time engineering/universal replication.
The former work is the particular use connected to our discussion (black hole computers). The second work (Wei Dai’s) is about black holes’ potential use as radiators/entropy dumps.
The ability to dump entropy is essential to computing, so Wei Dai’s work is relevant. Limits on entropy dumping provide limits on computation.
All of future physics is speculative, but many branches of current speculation in physics for ultimate technologies involve manipulating gravitational singularities. Some possibilities include the creation of new bubble universes which would allow the overall pattern to replicate and continue inside the new universes—a form of multiversal replication—the developmental singularity idea.
It is possible that we will gain technology that allows us to vastly increase our computing power beyond what is currently known to be possible in principle, but these speculations are only a subset of possible futures. The universe has to be a certain way, and there is no reason to prefer these hypotheses to any others.
The prior probability of unknown physics that lets Moore’s law continue is therefore low.
Yes, this is all speculation, as is any theory of physical eschatology (such as the theory that we will eventually colonize the galaxy). The original start of all of this was the observation that colonizing the galaxy would amount to an extremely slow rate of growth compared to the historical trend. Growth at the historical pace will require (or predicts) something more radical such as space-time engineering/universal replication.
If we observe a trend, but we can explain the trend and the explanation points to a specific time where the trend breaks down, then a hypothesis that invokes some effect to make the trend continue does no better a job of explaining our observations than a hypothesis that predicts the trend will stop.
The odds ratio is therefore about 1:1. This trend gives little evidence. The posterior probability of unknown physics that lets Moore’s law continue is therefore low.
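To spell out the arithmetic (the prior below is a made-up number, purely for illustration):

    # Posterior equals prior when the likelihood ratio from the trend is ~1:1.
    prior = 0.05                      # assumed prior for "unknown physics rescues Moore's law"
    likelihood_ratio = 1.0            # the trend is explained equally well either way
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"{posterior:.4f}")         # 0.0500: the trend moves the estimate essentially nowhere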
The ability to dump entropy is essential to computing, so Wei Dai’s work is relevant. Limits on entropy dumping provide limits on computation.
Actually this is not generally true. The ability to dump entropy ... is simply the ability to dump entropy. In the current dominant framework of irreversible, deterministically programmable von Neumann architectures, entropy is dumped left and right. Moore’s law for traditional computing will run into the Landauer limit relatively soon—this decade or next at the latest, and it will come to a hard end.
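To put a number on that (the erasure rate below is an arbitrary assumption, just for scale):

    import math

    k_B = 1.380649e-23               # Boltzmann constant, J/K
    T = 300.0                        # room temperature, K
    landauer_J_per_bit = k_B * T * math.log(2)
    print(f"{landauer_J_per_bit:.2e} J per erased bit")        # ~2.9e-21 J

    erasures_per_second = 1e20       # hypothetical chip erasing 1e20 bits/s
    print(f"{landauer_J_per_bit * erasures_per_second:.2f} W dissipation floor")   # ~0.29 W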
However, many algorithms can actually use entropy. Any type of algorithm that can use about as much entropy as it produces can trivially be made fully reversible and asymptotically approach zero net energy dissipation. Monte Carlo simulation is a prototypical example, and entropy has similar uses in pattern prediction from compressed knowledge in the domain of AI algorithms.
Furthermore, advanced physics simulations of the type that future upload civilizations would desire can be made trivially reversible because physics itself is reversible. Any state updates and differential equations used in physics simulation are thus reversible and need not even produce any waste entropy. This combined with the potential positive uses of any actual entropy could allow computation in general to continue to advance. The limit only applies to specific classes of computation, and fortunately the most important future domains of massive computation (general simulation and related general intelligence) are fully reversible at zero penalty.
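As a toy illustration of the reversibility point: a universal reversible gate such as the Toffoli gate erases no information, so by Landauer’s principle it carries no minimum dissipation per operation.

    from itertools import product

    def toffoli(a: int, b: int, c: int):
        # Controlled-controlled-NOT: flips c iff a and b are both 1; a and b pass through.
        return a, b, c ^ (a & b)

    # The gate is its own inverse, so no input information is ever destroyed.
    for bits in product((0, 1), repeat=3):
        assert toffoli(*toffoli(*bits)) == bits
    print("Toffoli is reversible: applying it twice recovers every input.")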
Yes, approaching those limits will require very low temperatures, and there will always be some random entropy coming in from the outside on the surface of the computer, but this surface can simply be used as an entropy source for the circuitry.
And finally, moving from deterministic to nondeterministic statistical computation in general further eliminates potential problems with entropy.
Of course there are other limits: there is a fundamental final limit, based on QM quantization and the uncertainty principle, on the minimum energy required to represent a bit and compute a “bit op”.
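To get a sense of scale, assuming the Margolus-Levitin bound of 2E/(pi*hbar) operations per second is the relevant quantum speed limit, here it is applied to 1 kg of mass-energy:

    import math

    hbar = 1.054571817e-34           # J*s
    c = 2.99792458e8                 # m/s

    E = 1.0 * c**2                   # rest energy of 1 kg, in joules
    max_ops_per_second = 2 * E / (math.pi * hbar)
    print(f"{max_ops_per_second:.2e} ops/s")       # ~5.4e50, the "ultimate laptop" scale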
That limit is very far away, but miniaturization limits of building any structures out of atoms places a closer soft limit in terms of the energy density that can be contained in a molecular structure. This may limit regular computing out of safe everyday materials to chemical bond energy densities, but we exceed those densities in nuclear reactors and eventually we could achieve those energy densities in computation. And again if the computation is reversible and all entropy is recycled it need not generate any heat (although the result of catastrophic failure of such a system could result in a nuclear-level accident, so this severely constrains the practical economics).
Looking farther ahead, we can see that the uncertainty principle does not say that 1 quantum of energy can only use or compute 1 bit. In fact the limits are unimaginably more generous. An interaction (such as a collision) of 2 particles with N bit-states can have on the order of 2^N possible output states, so the final ladder is to turn each individual particle into a complex functional mapping or a small computer unto itself. If climbing that ladder is ever practically possible (and it appears to be), it may not technically lead to infinity but it’s close enough. This is all with known physics.
You bring up some interesting points. I do not know whether minds could be made fully reversible in practice (obviously it’s possible in principle, since physics is reversible). The question, however, is not whether negentropy use can be lowered but whether it can be lowered to the point that a different resource, one which does not follow the M^2 power law, is the limiting one. If negentropy use can be lowered, what is the new limiting factor?
For example, you mentioned that many technologies require low temperatures. However, in the absence of perfect shielding against the CMB, this requires a cooling system, which is the same thing as an entropy absorber. The limiting resource in this case is still entropy.
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
If negentropy use can be lowered, what is the new limiting factor?
I imagine there will always be limiting factors, but they change with knowledge and technology.
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment. I’m not sure that future hyperintelligences will necessarily have any need for the cold vacuum.
In fact, ‘entropy’ comes in many different forms. Cosmic rays are a particularly undesirable form of entropy, micrometeorites more so, and large asteroids and supernovas are just extrema on the same scale. There is always something. A planetary atmosphere and crust provides some nice free armor.
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. I’d bet that in the long term rare materials are a limiting factor, energy cost is still a limiting factor—but mainly just in terms of energy costs of construction rather than operation, and isolation/cooling/protection is something of a limiting factor, but these may be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material—such as proximity to existing knowledge/computational clusters, and ultimately—novelty (new information).
For example, you could perform a googolplex of operations per second and still be the dumbest hyperintelligence on the block if you are stuck with only human sensory capacities and a slow, high-latency connection to other hyperintelligences and knowledge sources.
You did not respond to my statement that the posterior probability of unknown physics that lets Moore’s law continue is low. Does this mean you agree? If not, where is the flaw in my argument?
I’ve thought a little more about how to assign a likelihood to known physics (Bayesian evidence and a universal prior), and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM, and it led me to realize that whole tracts of it are ... on the wrong track.
The universal prior as applied to physics is a whole topic in and of itself, but it is the best guiding principle as to what ultimate final physics will allow. Creation of baby universes is dependent on GR and a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet. A more basic first question might be something like—which is more a priori likely, analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
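As a toy sketch of what I mean by using the universal prior as a guide (the hypothesis names and program lengths are invented for illustration):

    # Length-weighted prior over candidate "final physics" programs, in the spirit
    # of the universal prior: weight each hypothesis by 2^-(program length in bits).
    lengths = {"digital_model_A": 120, "digital_model_B": 135, "analog_model_C": 180}
    weights = {h: 2.0 ** -k for h, k in lengths.items()}
    Z = sum(weights.values())
    for h, w in weights.items():
        print(f"{h}: {w / Z:.3g}")   # shorter programs get exponentially more prior mass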
I’m fairly sure that entropy can be recycled/managed well enough that heat/entropy issues will not be end limiters. In fact you could probably take reversible computing and entropy recycling to an extreme and make a computer that actually emitted negative net heat—absorbed entropy from the environment.
That violates the second law of thermodynamics unless you discover an infinite heat sink, which requires a specific type of new physics.
But anyway, I digress. I’m not even absolutely certain there will always be limiting factors, but I’d bet that way. I’d bet that in the long term rare materials are a limiting factor, energy cost is still a limiting factor—but mainly just in terms of energy costs of construction rather than operation, and isolation/cooling/protection is something of a limiting factor, but these may be looking at the problem in the wrong light.
Bigger limiting factors for future hyper-intelligences may be completely non-material—such as proximity to existing knowledge/computational clusters, and ultimately—novelty (new information).
This all depends on what is being limited by these factors, which is a matter of your values. If you value sentient life, you need computing power. If you value novelty and learning, you also need computing power, but there might be diminishing returns (of course, it is not inconsistent to value sentience with diminishing returns, though most humans who do are inconstant).
I’ve thought a little more about how to assign a likelihood to known physics (Bayesian evidence and a universal prior), and it led me to the inescapable conclusion that we are still a ways away from final physics. In fact, in the process I’ve been reading up more on QM, and it led me to realize that whole tracts of it are ... on the wrong track.
I’m skeptical of this. Can you show your work? I’m particularly doubtful of your opinions on QM, unless they’re based on some interesting point about induction, in which case I’m only as doubtful of that as I am of the rest of this paragraph.
Creation of baby universes is ... a prediction of loop quantum gravity in particular; I haven’t gotten to those maths yet.
No, the only thing baby universes and LQG have in common is that Lee Smolin studies them. He hypothesized baby universes not based on LQG, but because they allow a form of natural selection that has a chance of predicting life-filled universes without having to think about anthropic considerations. This seems like a horribly confused reason. The theory has no evidence in its favour, so its probability is not higher than its prior. In fact, according to Smolin’s Wikipedia page, it has been falsified by a discovery that the mass of the strange quark is not tuned for optimal black hole production.
A more basic first question might be something like—which is more a priori likely, analog or digital, and by how much? I’m betting digital, but if analog is not ruled out by the UP it could allow for unlimited local computation in principle, as one example.
If a prior prohibits an analog universe, then it is a suboptimal prior.