None of this argues that creating grey goo is an unlikely outcome, just that it’s a hard problem. And we have an existence proof of at least one way to make grey goo that covers a planet: life-as-we-know-it, which did exactly that.
But solving hard problems is a thing that happens, and unlike the speed of light, this limit isn’t fundamental. It’s more like the 1800s “proofs” that heavier-than-air flight is impossible, or the current “proofs” that LLMs won’t become AGIs: convincing until the counterexample exists, but not at all indicative that no counterexample does or could exist.
OP said:
I use “nanobots” to mean “self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior”.
(And I believe they’re using “grey goo” the same way.) So I think you’re using a different definition of “grey goo” from OP, and that under OP’s definition, biological life is not an existence proof.
I think the question of “whether grey-goo-as-defined-by-OP is possible” is an interesting question and I’d be curious to know the answer for various reasons, even if it’s not super-central in the context of AI risk.
He excludes the only examples we have, which is fine for his purposes, though I’m skeptical it’s useful as a definition, especially since “some difference” is an unclear and easily moved bar. However, it doesn’t change how we should predict whether something different is possible. That is, even if the example is excluded by definition, it is still very relevant to the question “is something in the class possible to specify?”