(2) Can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?
In the ‘magical nano exists’ universe, the AI can do this with well-behaved nanofactories.
In the ‘bio-like nano’ universe, ‘evolutionary dynamics’ (aka game theory among replicators under high Brownian noise) will make ‘operate perpetually’ a shaky proposal for any entity that values its goals and identity. No one ‘operates perpetually’ under high noise; goals and identity are constantly evolving.
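The ‘goals constantly evolve under noise’ claim can be illustrated with a toy model (all parameters invented for illustration): replicators inherit a scalar ‘goal’ from a randomly chosen parent plus copying noise, with no selection pressure holding the goal in place.

```python
import random

def simulate_drift(generations=200, pop_size=100, noise=0.05, seed=0):
    """Toy model: each replicator carries a scalar 'goal'. Copies inherit
    a random parent's goal plus Gaussian copying noise (imperfect error
    correction). Returns the population's mean goal after the run; the
    founder's goal was 0.0, so any nonzero value is drift."""
    rng = random.Random(seed)
    pop = [0.0] * pop_size  # everyone starts sharing the founder's goal
    for _ in range(generations):
        # each slot in the next generation is a noisy copy of a random member
        pop = [rng.choice(pop) + rng.gauss(0.0, noise) for _ in range(pop_size)]
    return sum(pop) / pop_size

drift = simulate_drift()
print(abs(drift))  # almost surely nonzero: the shared goal has wandered
```

Even with no adversarial pressure at all, the population mean performs a random walk away from the founder's goal; adding selection (the game-theoretic part) only makes the drift directional.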
So the answer to the question is likely ‘no’—you need to drop some constraints on ‘an AI’ or ‘operate perpetually’.
Before you say ‘I don’t care, we all die anyway’—maybe you don’t, but many people (myself included) do care rather a lot about who kills us and why and what they do afterwards.
I’m imagining an exchange like this.

ME: Imagine a world with chips similar to today’s chips, and robots similar to humans, and no other nano magic. With enough chips and enough robots, such a system could operate perpetually, right? Just as human society does.
THEM: OK sure that could happen but not until there are millions or even billions of human-level robots, because chips are very hard to fabricate, like you need to staff all these high-purity chemical factories and mines and thousands of companies manufacturing precision equipment for the fab etc.
ME: I don’t agree with “millions or even billions”, but I’ll concede that claim for the sake of argument. OK fine, let’s replace the “chips” (top-down nano) with “brains-in-vats” (self-assembling nano). The vats are in a big warehouse with robots supplying nutrients. Each brain-in-vat is grown via a carefully controlled process that starts with a genome (or genome-like thing) that is synthesized in a DNA-synthesis machine and quadruple-checked for errors. Now the infrastructure requirements are much smaller.
~~
OK, so now in this story, do you agree that evolution is not particularly relevant? Like, I guess a brain-in-a-vat might get cancer, if the AI can’t get DNA replication error rates dramatically lower than they are in humans (I imagine it could, because its tradeoffs are different), but I don’t think that’s what you were talking about. A brain-in-a-vat with cancer is not a risk to the AI itself; it could just dump the vat and start over.
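A back-of-envelope check on those error rates, using assumed human-like figures (none of these numbers come from the discussion above; the 99%-per-check catch rate in particular is a pure assumption):

```python
# Assumed figures, roughly human-like (illustrative only):
error_rate = 1e-8    # per-base error probability per replication, post-proofreading
genome_size = 3e9    # genome length in bases
checks = 4           # "quadruple-checked" at synthesis time
check_miss = 0.01    # assume each independent check misses 1% of errors

expected_errors = error_rate * genome_size            # ≈ 30 errors per full copy
after_checking = expected_errors * check_miss**checks # ≈ 3e-07 residual errors

print(expected_errors)
print(after_checking)
```

Under these assumptions the synthesized starting genome can be made essentially error-free, but every subsequent in-vat cell division reintroduces the ~30-errors-per-copy budget, which is why the cancer caveat is about replication during growth rather than the checked genome itself.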
(This story does require that the AI solves the alignment problem with respect to the brains-in-vats.)
If you construct a hypothetical wherein there is obviously no space for evolutionary dynamics, then yes, evolutionary dynamics are unlikely to play a big role.
The case I was thinking of (which would likely be part of the research process towards ‘brains in vats’—essentially a prerequisite) is larger and larger collectives of designed organisms, forming tissues etc.
It may be possible to design a functioning brain in a vat from the ground up with no evolution, but I imagine that
a) you would get there faster verifying hypotheses with in vitro experiments
b) by the time you got to brains-in-vats, you would be able to make lots of other, smaller scale designed organisms that could do interesting, useful things as large assemblies
And since you have to pay a high price for error correction, the group that is more willing to gamble with evolutionary dynamics will likely have MVOs ready to deploy sooner than the one that insists on stripping all the evolutionary dynamics out of their setup.
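The speed cost of that error-correction price compounds exponentially. A sketch with invented numbers (doubling times and the deployment threshold are assumptions, not claims about real biology):

```python
import math

# Assumed: the careful lineage spends extra time proofreading each division,
# the gambling lineage replicates faster but tolerates mutation.
careful_doubling_h = 3.0  # hours per doubling, paying for error correction
fast_doubling_h = 2.0     # hours per doubling, gambling on fidelity
target = 1e6              # organisms needed for a minimum viable deployment

def hours_to_target(doubling_h, start=1):
    # doublings needed = log2(target/start); time = doublings * doubling time
    return doubling_h * math.log2(target / start)

print(hours_to_target(careful_doubling_h))  # ≈ 59.8 h
print(hours_to_target(fast_doubling_h))     # ≈ 39.9 h
```

Even a modest per-division fidelity tax turns into a large absolute head start for the gambler at deployment scale, and the gap widens with every extra order of magnitude of organisms required.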