If it’s fine for me to enter the discussion, it seems to me that:
A very effective narrow AI is an AI that can solve certain closed-ended problems very effectively, but can’t generalise.
Since agents are necessarily limited in the number of factors they can account for in their calculations, open-ended problems are fundamentally closed-ended problems with influxes of more-or-less undetermined data that affect which solutions are viable (so we can’t easily compute, at least initially, how that data will affect the space of possible actions). But some open-ended problems have so many possible factors that need accounting for (like ‘solving the economy and increasing growth’) that the space of possible actions a general system (like a human) can conceivably take to solve one of them effectively IS, at the very least, the space of possible actions a narrow AI needs to consider in order to solve the problem as effectively as a human would.
At that point, a “narrow AI that can solve an open-ended problem” is at least as general as an average human. If the number of possible actions it can take increases, then it’s even more general than the average human.
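To make the set-inclusion in that argument explicit (the notation below is my own sketch, not anything from the discussion itself): let A_H(P) be the set of actions a human can conceivably consider for an open-ended problem P, let A_N(P) be the set a narrow AI has to search in order to solve P at least as well as the human, and let g(·) measure an agent’s generality by the breadth of the action space it can effectively handle. The two claims above then read:

\[
A_N(P) \supseteq A_H(P) \;\Rightarrow\; g(\text{narrow AI}) \ge g(\text{human}), \qquad
A_N(P) \supsetneq A_H(P) \;\Rightarrow\; g(\text{narrow AI}) > g(\text{human})
\]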
Kinds and species are fundamentally the same thing.