Informally, it’s the kind of intelligence (usually understood as something like “the capacity to achieve goals in a wide variety of environments”) that does whatever is instrumental to achieving a goal: given a goal, it is the capacity to achieve that goal.
Bostrom, in Superintelligence (2014), speaks of it as “means-end reasoning”.
So, strictly speaking, it does not involve reasoning about the ends or goals in whose service the intelligence/optimisation is being applied.
Example: a chess-playing system will have some pre-defined goal and optimise instrumentally toward that end, but will not evaluate the goal itself.
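To make the chess example concrete, here is a minimal, purely illustrative sketch (the names `choose_action` and `goal_value` are mine, not from the paper) of an agent that does means-end reasoning over a fixed goal it never evaluates or modifies:

```python
# Illustrative sketch of an "instrumental" optimiser (hypothetical code).
# The goal is a fixed, externally supplied evaluation function; the agent
# searches for the action that best serves it, but never inspects, questions,
# or rewrites the goal itself -- pure means-end reasoning.

from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def choose_action(
    state: State,
    legal_actions: Callable[[State], Iterable[Action]],
    transition: Callable[[State, Action], State],
    goal_value: Callable[[State], float],  # the pre-defined goal, treated as a black box
) -> Action:
    """Pick the action whose resulting state scores highest under the fixed goal."""
    return max(
        legal_actions(state),
        key=lambda a: goal_value(transition(state, a)),
    )

# Toy usage: the agent optimises toward a target it was handed, with no
# machinery anywhere for asking whether pursuing that target is worthwhile.
if __name__ == "__main__":
    actions = lambda s: [-1, +1]          # moves available in state s
    step = lambda s, a: s + a             # deterministic transition
    goal = lambda s: -abs(s - 10)         # fixed target: get s close to 10
    print(choose_action(0, actions, step, goal))  # -> +1
```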
I haven’t read your full paper yet, but from your summary it’s unclear to me how such an understanding of intelligence would be inconsistent with the “Singularity” claim.
Instrumental superintelligence seems feasible: a system that is better at achieving a given goal than the most intelligent human.
Such a system could also self-modify to better achieve its goal, leading to an intelligence explosion.
We suggest that such instrumental intelligence would be very limited.
In fact, there is a question of degree of generality here: it seems one needs a fairly high degree of generality to get to XRisk, but that high degree would then exclude orthogonality.
It’s not the inability to change its goals that makes it less powerful; it’s the inability to self-improve.