Because nanomachinery is harder than AGI. Narrow nanomachinery AI is possible in principle, but we won't produce it before general AI: nanomachinery is expected to be a complex enough real-world problem that, if we try to optimize for it, we will get AGI with a nanomachinery-inspired utility function first.
Is it well-established that AGI is easier to solve than nanomachinery? Yudkowsky seems to be confident it is, but I wouldn't have expected that to be a question we know the answer to yet. (Though my expectations could certainly be wrong.)
A fair question. I don't think it is established, exactly, but the plausible window is quite narrow. For example, if nanomachinery were easy, we would already have that technology, no? And we seem quite close to AGI.