[Eli’s personal notes for personal understanding. Feel free to ignore or engage.]
If a planning Oracle is going to produce better solutions than humanity has yet managed to the Rubik’s Cube, it needs to be capable of doing original computer science research and writing its own code.
Is this true? It seems like the crux of this argument.
I’m curious if you’ve read up on Eric Drexler’s more recent thoughts (see this post and this one for some reviews of his lengthier book). My sense was that it was sort of a newer take on something-like-tool-AI, written by someone who was more of an expert than Holden was in 2012.
Talking like there’s this one simple “predictive algorithm” that we can read out of the brain using neuroscience and overpower to produce better plans… doesn’t seem quite congruous with what humanity actually does to produce its predictions and plans.
Ok. I have the benefit of the intervening years, but talking about “one simple ‘predictive algorithm’” sounds fine to me.
It seems like, in humans, there’s probably basically one cortical algorithm, which does some kind of metalearning. And yes, in practice, doing anything complicated involves learning a bunch of more specific mental procedures, what Paul calls “the machine” in this post (for instance, learning to do decomposition and Fermi estimates instead of just doing a gut check when estimating large numbers; toy sketch below). But so what?
Is the concern there that we just don’t understand what kind of optimization is happening in “the machine”? Is the thought that that kind of search is likely to discover how to break out of the box because it will find clever tricks like “capture all of the computing power in the world”?
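[A toy sketch, to make “decomposition and Fermi estimates instead of just doing a gut check” concrete for myself. This is the classic piano-tuners-in-Chicago exercise; every factor value here is a rough guess of my own, chosen only to illustrate the procedure of multiplying explicit sub-estimates rather than to defend the final number.]

```python
# Toy Fermi estimate by decomposition. All factor values are rough personal guesses;
# the point is the explicit breakdown into sub-estimates, not the exact answer.

population = 3_000_000              # people in Chicago (rough)
people_per_household = 2.5
households_with_piano = 1 / 20      # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

tuners = tunings_needed_per_year / tunings_per_tuner_per_year
print(f"~{tuners:.0f} piano tuners in Chicago")  # prints "~60 piano tuners in Chicago"
```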
As for the notion that this AGI runs on a “human predictive algorithm” that we got off of neuroscience and then implemented using more computing power, without knowing how it works or being able to enhance it further: It took 30 years of multiple computer scientists doing basic math research, and inventing code, and running that code on a computer cluster, for them to come up with a 20-move solution to the Rubik’s Cube. If a planning Oracle is going to produce better solutions than humanity has yet managed to the Rubik’s Cube, it needs to be capable of doing original computer science research and writing its own code. You can’t get a 20-move solution out of a human brain, using the native human planning algorithm. Humanity can do it, but only by exploiting the ability of humans to explicitly comprehend the deep structure of the domain (not just rely on intuition) and then inventing an artifact, a new design, running code which uses a different and superior cognitive algorithm, to solve that Rubik’s Cube in 20 moves. We do all that without being self-modifying, but it’s still a capability to respect.
Why does this matter?
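[To make the “different and superior cognitive algorithm” point concrete for myself: as I understand it, the 20-move result came from explicit search that exploits the group structure of the cube, plus a great deal of compute, not from the native human planning algorithm. The sketch below is not that work and doesn’t come from any of the linked posts; it’s a minimal illustration, under my own encoding, of the same family of techniques (iterative-deepening A* with an admissible heuristic), applied to the 8-puzzle instead of the cube so it fits in a few dozen lines.]

```python
# Toy sketch (my own, not from the linked posts): iterative-deepening A* with an
# admissible heuristic, i.e. explicit search over the deep structure of the domain.
# It solves the 8-puzzle (3x3 sliding tiles) rather than the Rubik's Cube to stay small.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    """Admissible heuristic: total Manhattan distance of every tile from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

def neighbors(state):
    """Yield states reachable by sliding one adjacent tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = r * 3 + c
            new = list(state)
            new[blank], new[swap] = new[swap], new[blank]
            yield tuple(new)

def ida_star(start):
    """Return a shortest solution path (list of states) via iterative-deepening A*."""
    bound = manhattan(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + manhattan(node)
        if f > bound:
            return f              # bound exceeded: report the candidate next bound
        if node == GOAL:
            return True
        minimum = float("inf")
        for nxt in neighbors(node):
            if nxt in path:       # avoid trivial cycles
                continue
            path.append(nxt)
            result = search(g + 1, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result is True:
            return path
        if result == float("inf"):
            return None           # unreachable (wrong parity class)
        bound = result            # deepen the threshold and retry

if __name__ == "__main__":
    scramble = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # two moves from solved
    solution = ida_star(scramble)
    print(f"solved in {len(solution) - 1} moves")  # prints "solved in 2 moves"
```

[The heuristic plus the systematic deepening is the kind of explicit comprehension of the domain’s structure that the quoted passage is pointing at: a designed artifact that outperforms the native, intuition-driven planner on this narrow task.]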