To have a rigorous discussion, one thing we need is a clear model of the thing we are talking about (e.g., for computability we can talk about Turing machines, or about specific models of quantum computers). The discussion in Superintelligence still isn’t at the level where the mental models are fully specified, which may be where the disagreement in this thread is coming from. For my own mental model I’m using something like the classic tree-search-based chess-playing AI, but with a bunch of unspecified optimizations that let it search usefully in a large space of possible actions (plus the ability to reason about and modify its own source code). But it’s hard to be sure I’m not sneaking some anthropomorphism into that model, which in this case is likely to lead one quickly astray.
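To make the "tree search over actions" part of that mental model concrete, here is a minimal sketch: plain minimax with alpha-beta pruning over a toy game tree. The tree, its leaf scores, and the function name are all illustrative assumptions, not from any real chess engine; a real engine would add the unspecified optimizations (move ordering, evaluation heuristics, transposition tables) that make search tractable in a large action space.

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best score guaranteeable from `node`.

    `node` is either a number (a leaf's static evaluation) or a list
    of child nodes (positions reachable by one action).
    """
    if isinstance(node, (int, float)):  # leaf: return its evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent won't allow this branch
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A tiny two-ply tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))  # prints 3: the maximizer can guarantee a score of 3
```

The point of the sketch is just that nothing in it is anthropomorphic: the "agent" is a search procedure plus an evaluation function, and everything interesting about a superintelligent version lives in the optimizations left unspecified.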