Eh? I’m pretty fine with something proving the Riemann Hypothesis before the world ends. It came up during my recent debate with Paul, in fact.
Not so fine with something designing nanomachinery that can be built by factories built by proteins. They’re legitimately different orders of problem, and it’s no coincidence that the second one has a path to pivotal impact, and the first does not.
I understand why trusting the alignment of an AI that’s suggesting methods for mass-producing nanomachinery might be unwise, but I don’t quite understand why we wouldn’t expect to be able to produce a narrow AI that can do that. Specifically, if we hypothesize that GPT + more compute won’t FOOM, I’m not sure why something GPT-like would be unable to create nanomachines.
Because nanomachinery is harder than AGI. A narrow nanomachinery AI is possible in principle, but we won’t produce it before general AI: the problem is expected to be a complex enough real-world problem that, if we try to optimize for it, we will sooner get an AGI with a utility function inspired by nanomachinery.
Is it well-established that AGI is easier to solve than nanomachinery? Yudkowsky seems confident it is, but I wouldn’t have expected that to be a question we know the answer to yet. (Though my expectations could certainly be wrong.)
A fair question. I don’t think it is established, exactly, but the window in which nanomachinery could turn out to be the easier problem seems quite narrow. For example, if nanomachinery were easy, we would already have that technology, no? And we seem quite near to AGI.