In the emulation case, how does intelligence acting on itself come into the picture? (I agree it might do so after there are emulations, but I’m talking about the jump from the capabilities prior to the first good emulation to those of emulations.)
Hmm… let me think…
The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura’s first projection to build in the static part of the computation—physics—thereby making the resulting program substrate-independent. The entire process of creating a viable emulation strategy happens when we humans run a biological computation that (i) tells us what is necessary to create a substrate-independent brain-state spec and (ii) solves a lot of practical physics simulation problems, so that to generate an em, the brain-state spec is all we need. This is somewhat analogous to Futamura’s second projection: we take the ordered pair (biological computation, physics), run a particular biological computation on it, and get a brain-state-to-em compiler.
So intelligence is acting on itself indirectly through the fact that an “interpreter”, physics, is how reality manifests intelligence. We aim to specialize physics out of the process of running the biological computations that implement intelligence, and by necessity, we’re using a biological computation that implements intelligence to accomplish that goal.
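For readers who want the Futamura analogy made concrete, here is a minimal partial-evaluation sketch in Python. Everything in it is an illustrative stand-in of my own, not anything from the emulation literature: `physics` is a toy interpreter, a float plays the role of a brain-state spec, and `mix` is a trivial specializer. The point is only the shape of the two projections.

```python
# Toy sketch of the two Futamura projections in the em analogy.
# All names here are hypothetical stand-ins, not a real API.

def physics(brain_state, dt):
    """Stand-in 'interpreter': push a state one step through time."""
    return brain_state + dt  # toy time-evolution rule

def mix(f, static_arg):
    """A trivial partial evaluator: bake f's known first argument in."""
    return lambda dynamic_arg: f(static_arg, dynamic_arg)

# First projection: specialize the interpreter to one brain-state spec.
# The result is an "em" that runs without base reality's interpreter.
em = mix(physics, 0.0)        # 0.0 stands in for a brain-state spec
print(em(0.1))                # evolve the emulated state forward -> 0.1

# Second projection: specialize the specializer to the interpreter,
# yielding the brain-state-spec -> em "compiler" described above.
make_em = mix(mix, physics)
another_em = make_em(1.0)     # compile a different brain-state spec
print(another_em(0.1))        # -> 1.1
```

In the analogy, `mix` is the part played by the emulation-makers’ own biological computation: running it against physics is what produces the compiler.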
I’m not sure I followed that correctly, but I take it you are saying that making brain emulations involves biological intelligence (the emulation makers) acting on biological intelligence (the emulations). Which is quite right, but it seems like intelligence acting on intelligence should only (as far as I know) produce faster progress if there is some kind of feedback—if the latter intelligence goes on to make more intelligence etc. Which may happen in the emulation case, but after the period in which we might expect particularly fast growth from copying technology from nature. Apologies if I misunderstand you.
I wasn’t talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It’s not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.
I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?
Because the solution has an immediate impact on the exercise of intelligence, I guess? I’m a little unclear on what other problems you have in mind.
The impact on the exercise of intelligence doesn’t seem to come until the ems are already discontinuously better (if I understand), so it can’t explain the discontinuous progress.
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities—being able to run those computations in places pink goo can’t go and at speeds pink goo can’t manage is already a huge leap.
Even if it is a huge leap to achieve that, until you run the computations, it is unclear to me how they could have contributed to that leap.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
Since we’re just bouncing short comments off each other at this point, I’m going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. …If this is an unusual situation however, it seems strange that the other most salient route to superintelligence—artificial intelligence designed by humans—is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.
The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the “interpreter”, physics, that realizes that abstract computation.