Many complex physical systems are still modelled largely empirically (with ad-hoc models validated by experiment) rather than derived from first principles. While physicists sometimes claim to derive results from first principles, in practice these derivations often ignore many details that still have to be justified experimentally.
The argument here seems to be “humans have not yet discovered true first-principles justifications of the practical models, therefore a superintelligence won’t be able to either”.
I agree that not being able to experiment makes things much harder, such that an AI only slightly smarter than humans won’t one-shot engineer things humans can’t iteratively engineer. And I agree that we can’t be certain it is possible to one-shot engineer nanobots with remotely feasible compute resources. But I don’t see how we can be sure what isn’t possible for a superintelligence.