I wasn’t talking about faster progress as such, just about a single large discontinuity in our capabilities, predictable in advance, at the point when the em approach first bears fruit. It’s not a continual feedback loop, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.
I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?
Because the solution has an immediate impact on the exercise of intelligence, I guess? I’m a little unclear on what other problems you have in mind.
The impact on the exercise of intelligence doesn’t seem to come until the ems are already discontinuously better (if I understand correctly), so it can’t explain the discontinuous progress.
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) expands our capabilities by itself: being able to run those computations in places pink goo can’t go, and at speeds pink goo can’t manage, is already a huge leap.
Even if it is a huge leap to achieve that, until you run the computations, it is unclear to me how they could have contributed to that leap.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
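(To make the point of the last few comments concrete: a toy Python sketch, entirely my own illustration with made-up names and an assumed 1000x speedup, of what substrate-independence in practice buys. The same computation gives the same answer on either substrate; what changes is where it can run and how fast.)

```python
# Toy illustration only: hypothetical names, assumed numbers.

def abstract_computation(x):
    # Stand-in for some fixed biological computation, e.g. one reasoning step.
    return x * x + 1

class BiologicalSubstrate:
    speedup = 1            # pink goo runs at a fixed speed...
    habitat = {"Earth"}    # ...and only in a narrow range of environments

class SimulatedSubstrate:
    speedup = 1000         # assumed, purely for illustration
    habitat = {"Earth", "orbit", "anywhere with a computer"}

for substrate in (BiologicalSubstrate, SimulatedSubstrate):
    # Identical result on either substrate: the computation is the same.
    print(substrate.__name__, abstract_computation(3), f"{substrate.speedup}x")
# BiologicalSubstrate 10 1x
# SimulatedSubstrate 10 1000x
```

(The claimed leap is the jump from the first line of output to the second: nothing about the computation changed, only its realization.)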
Since we’re just bouncing short comments off each other at this point, I’m going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. …If this is an unusual situation however, it seems strange that the other most salient route to superintelligence—artificial intelligence designed by humans—is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.
The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the “interpreter”, physics, that realizes that abstract computation.
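(To make the “interpreter” analogy concrete: a toy, runnable Python sketch, entirely my own construction rather than anything proposed in the discussion. Route 1 reimplements the abstract computation because we understand it; Route 2 never understands the “program”, here an opaque instruction tape standing in for a scanned brain state, and instead implements only the low-level interpreter standing in for physics, then runs the copied program on it.)

```python
# Toy analogy only: every name here is hypothetical.

# Route 1 (designed AI): we understand the algorithm, so we write it directly.
def designed_double_plus_one(x):
    return 2 * x + 1

# Route 2 (emulation): the "program" is opaque data copied from the original
# system; we never need to understand what it computes.
COPIED_STATE = [("add", "x"), ("add", 1)]  # stands in for a scanned brain state

def physics_step(acc, instruction, x):
    # The "interpreter": primitive, local update rules, analogous to physics.
    op, arg = instruction
    if op == "add":
        return acc + (x if arg == "x" else arg)
    raise ValueError(f"unknown primitive: {op}")

def emulate(state, x):
    acc = x
    for instruction in state:  # just keep applying the primitives
        acc = physics_step(acc, instruction, x)
    return acc

# Both routes realize the same behavior, reached by opposite kinds of work:
assert designed_double_plus_one(5) == emulate(COPIED_STATE, 5) == 11
```

(On this picture, the discontinuity intuition is that Route 2 delivers the program’s full behavior all at once, the moment the interpreter is faithful enough, with no intermediate stage in which the program is only half-understood.)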