‘apart from pointing out the actual physical difficulties in doing the thing’
This excludes most of the potentially good arguments! If you can show that large areas of the solution space seem physically unrealizable, that’s an argument which potentially generalizes to ASI. For example, I think people can put good limits on how an ASI could and couldn’t traverse the galaxy, and trivially rule out threats like ‘the AI crashes the moon into Earth’, on physical grounds.
To hypothesize an argument of this sort that might be persuasive, at least to people able to verify such claims: ‘Synthesis of these chemicals is not energetically feasible at these scales, because forming these bonds takes X energy, but it is only feasible to store Y energy in the available bonds. This limits you to a very narrow set of reactions, which seems unable to produce the desired state. Thus larger devices are required, absent construction under an external power source.’ I think a similar argument could plausibly be made about object stickiness, though I don’t have the chemistry knowledge to frame how that might look.
There aren’t as many arguments left once we exclude physical ones. If you wanted to argue that something was plausibly physically realizable but that a strong ASI wouldn’t figure it out, I suppose some in-principle argument that it requires solving a computationally intractable problem in lieu of experiment might work, though that seems hard to argue in practice.
It’s generally hard to use weaker claims to limit far ASI because, being by definition qualitatively and quantitatively smarter than us, it can reason about things in ways we can’t. I’m happy to grant that there might exist important, practically-solvable-in-principle tasks that an ASI fails to solve, but it seems implausible that I could know ahead of time which tasks those are.