I feel much the same about this post as I did about Roko’s Final Post.
So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored) or just an observation. But again, I wasn't here, so I don't know much of anything about Roko or his posts.
In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too.
We have sent robot probes to only a handful of locations in our solar system, a far cry from "most of the planets" unless you think the rest of the galaxy is a facade (and yes, I realize you probably meant the solar system, but still). And the jury is still out on Mars: it may have had simple life in the past, and we don't have enough observational data yet. Also, there may be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.
Beware hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. If you seriously want to weigh the principle of anthropomorphic uniqueness (that Earth is a rare, unique gem by every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong.
Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.
We used to think we were at the center of the galaxy; in fact we sit within the middle 95% interval. We used to think our system was unique in having planets; we now know that planets are typical. Our system is not especially old or young, and so on. By every measure we can currently take with the data we have, our system is average.
So you can say that life arises and develops into civilization in only one system in a trillion on average, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our own solar system, we see life arising on 1 body out of a few dozen, with the possibility of that being 2 or 3 out of a few dozen (Mars, Europa, and Titan still carry some small probability).
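As a toy illustration of that sampling argument (the numbers and the helper function below are my own assumptions, not anything stated in the thread): if the few dozen well-surveyed bodies are naively treated as exchangeable samples with a uniform prior on the per-body chance of life, Laplace's rule of succession gives a rough estimate many orders of magnitude above one in a trillion.

```python
# Toy Bayesian sketch of the "random sample from an unknown distribution" point.
# Assumptions (illustrative only): ~30 reasonably well-surveyed bodies, 1 known
# to host life, bodies treated as exchangeable draws, uniform Beta(1,1) prior.

def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: posterior mean of a Beta(1,1) prior
    after observing `successes` out of `trials` Bernoulli trials."""
    return (successes + 1) / (trials + 2)

bodies_surveyed = 30   # hypothetical stand-in for "a few dozen"
bodies_with_life = 1   # Earth

print(f"naive per-body estimate: {rule_of_succession(bodies_with_life, bodies_surveyed):.3f}")
# ~0.063, nowhere near the one-in-a-trillion figure being argued against
```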
But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale.
Actually, no, I do not find the cosmic-scale computer scenarios of Stross, Moravec, et al. to be realistic. I find them about as realistic as our descendants dismantling the universe to build Babbage's Difference Engines or giant steam clocks. But that analogy isn't very telling.
If you look at what physics tells you about the fundamentals of computation, you can derive surprisingly powerful invariant predictions about future evolution from just a few simple principles (a rough numerical sketch follows the list):
maximum data storage capacity is proportional to mass
maximum computational throughput is proportional to energy. With quantum computing, this also scales (for probabilistic algorithms) exponentially with the mass: vaguely O(E * 2^m), with E the energy and m the mass. This is, of course, insane, but apparently a fact of nature (if quantum computing actually works).
maximum efficiency (in multiple senses: algorithmic efficiency, intelligence as the ability to make effective use of data, transmission overhead) is inversely proportional to size (radius, volume); this is a direct consequence of the speed of light
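To make the speed-of-light point concrete, here is a rough back-of-the-envelope sketch (the sizes chosen below are my own illustrative assumptions): the minimum time for a signal to cross a machine grows linearly with its radius, so a planet-sized computer can run only a handful of globally synchronized steps per second.

```python
# Back-of-envelope latency scaling: minimum light-crossing time for a device of radius r.
# The example radii are hypothetical choices for illustration.

C = 299_792_458.0  # speed of light, m/s

def light_crossing_time(radius_m: float) -> float:
    """Lower bound on the time for a signal to traverse a device of the given radius."""
    return radius_m / C

for name, radius_m in [("1 cm chip", 0.01),
                       ("1 m rack", 1.0),
                       ("Earth-sized computer", 6.371e6)]:
    t = light_crossing_time(radius_m)
    print(f"{name:>22}: {t:.3e} s per crossing (~{1.0 / t:.3e} globally synced steps/s)")
```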
So armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely to ever get to planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn't something one has a choice about: big is slow and dumb, small is fast and smart.
Very roughly, I expect that a full-blown runaway Singularity on Earth may end up capturing a big chunk of the available solar energy (although perhaps less than the biosphere captures, as fusion or more exotic potentials exist), but would only ever need a small fraction of Earth's mass: probably less than humans currently use. And from thermodynamics, we know maximum efficiency is reached operating in the range of Earth's ambient temperature, and that would be something of a speed constraint.
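A minimal sketch of the thermodynamic constraint being gestured at (the ambient temperature and the solar-power figure are my own rough assumptions): the Landauer bound puts a floor of kT ln 2 on the energy cost of each irreversible bit operation, which at Earth-ambient temperature caps the irreversible computation a given power budget can support.

```python
import math

# Landauer bound: minimum energy to erase one bit at temperature T is k_B * T * ln(2).
# Numbers below are rough, for illustration only.

K_B = 1.380649e-23    # Boltzmann constant, J/K
T_AMBIENT = 300.0     # roughly Earth-ambient temperature, K
SOLAR_POWER = 1.7e17  # rough total solar power intercepted by Earth, W

energy_per_bit = K_B * T_AMBIENT * math.log(2)  # ~2.9e-21 J per erased bit

print(f"Landauer limit at {T_AMBIENT:.0f} K: {energy_per_bit:.2e} J per bit")
print(f"Irreversible bit-ops/s supportable by that power budget: {SOLAR_POWER / energy_per_bit:.2e}")
```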
It simply is the postulate that simulation does not create things.
Make no mistake, it certainly does, and this is just a matter of fact, unless one wants to argue definitions.
The computer you are using right now was first created as an approximate simulation in a mammalian cortex, later promoted to approximate simulations in computer models, then simulated in very detailed, near molecular/quantum-level models, and finally emulated (perfectly simulated) through numerous physical prototypes.
Literally everything around you was created through simulation in some form. You can’t create anything without simulation—thought itself is a form of simulation.
Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes.
If you are hard set against computationalism, it's probably not worth my energy to get into it (I took it as a given), but just to show my perspective a little:
Simulations of consciousness will create consciousness when we succeed in creating AGIs that are as intelligent as humans and objectively indistinguishable from them. At the moment we don't understand our own brain and the mechanisms of intelligence in enough detail to simulate them, and we don't yet have enough computational power to discover those mechanisms through brute-force evolutionary search. But that will change pretty soon.
Keep in mind that your consciousness, the essence of your intelligence, is itself a simulation, nothing more, nothing less.
Just enumerating all programs of length n requires memory resources exponential in n;
Not at all. It requires space of only n plus whatever each program uses at runtime. You are thinking of time resources; those do scale exponentially with n. But no hyperintelligence will use pure AIXI; they will use universal hierarchical approximations (the mammalian cortex already does something like this), which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
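A minimal sketch of the space-versus-time point (the toy generator below is my own, not anything from the thread): all 2^n programs of length n over a binary alphabet can be produced one at a time with memory proportional to n; it is the time to visit them all that explodes.

```python
from itertools import product

def enumerate_programs(n: int):
    """Lazily yield every bitstring of length n.
    Memory in flight is O(n); visiting them all takes O(2**n) time."""
    for bits in product("01", repeat=n):  # generator: never materializes all 2**n strings
        yield "".join(bits)

gen = enumerate_programs(40)
print(next(gen))                            # '000...0'
print(next(gen))                            # '000...1'
print(f"programs of length 40: {2**40:,}")  # time, not memory, is the obstacle
```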
actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn’t even big enough to simulate all possible stars.
Perfect optimal deterministic intelligence (absolute, deterministic, 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and it provides an exponential-time brute-force algorithm to find the ultimate minimal program that perfectly simulates said system. That program would essentially be the ultimate theory of physics. But you only need to find that program once, and forever after you can, in theory, simulate anything in linear time with a big enough quantum computer.
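To make the exponential-time brute-force search for a minimal program concrete, here is a toy sketch of my own (the interpreter and its encoding are entirely hypothetical): enumerate candidate programs in order of increasing length and return the first one whose output reproduces the observations. The search time grows exponentially with program length, which is exactly the cost described above.

```python
from itertools import product

def shortest_program(target: str, interpreter, alphabet: str = "01", max_len: int = 16):
    """Brute-force search for the shortest program whose output matches `target`.
    Time grows as |alphabet|**length; memory stays O(length)."""
    for length in range(1, max_len + 1):
        for symbols in product(alphabet, repeat=length):
            program = "".join(symbols)
            if interpreter(program) == target:
                return program
    return None

def toy_interpreter(program: str):
    """Hypothetical toy machine: first symbol is a bit, the rest is a repeat count in binary."""
    if len(program) < 2:
        return None
    bit, count_bits = program[0], program[1:]
    return bit * int(count_bits, 2)

print(shortest_program("1111", toy_interpreter))  # '1100': repeat '1' 0b100 = 4 times
```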
But you can only ever approach that ultimate, so if you want absolutely 100% accurate knowledge about how a physical system will evolve, you need to make the physical system itself. We already know this and make use of it throughout engineering.
First we create things as approximate simulations inside our mammalian cortices, generating and discarding a vast number of potential ideas; the best of these we simulate in ever more detail in computers, until eventually we physically build them and test those samples.
I think this is a very strong further argument that future hyperintelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, but it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.
But they will probably convert little chunks of dead matter here and there into hyperintelligences and use those to run countless approximate simulations (that is to say, hyperthought) of the interesting stuff they find, such as worlds with life.
Roko wasn’t censored; he deleted everything he’d ever posted. I’ve independently confirmed this via contact with him outside LW.
Roko was censored and publicly abused in and about one post but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)
Actually lots of people were censored—several of my comments were removed from the public record, for example—and others were totally deleted.
Hmm, I didn’t ask whether he’d ever had a comment deleted; what I’m confident of is that the root-and-branch removal of all his work was his own doing.
That’s what he says here.