I haven’t read your full paper yet, but from your summary, it’s unclear to me how such an understanding of intelligence would be inconsistent with the “Singularity” claim.
Instrumental superintelligence seems feasible: a system that is better at achieving a given goal than the most intelligent human.
Such a system could also self-modify to better achieve its goal, leading to an intelligence explosion.
We suggest that such instrumental intelligence would be very limited.
In fact, generality comes in degrees, and it seems one needs a fairly high degree of generality to get to XRisk, but that high degree would then exclude orthogonality.
It’s not the inability to change its goals that makes it less powerful; it’s the inability to self-improve.