It’s just a thought experiment; it is unlikely ever to come up in practice. But once we have a way to create lists of all the things humans should know in order to be the best possible humans, well...
An AI system (in the 20-100 years until we can do this) could probably consume the same list. And it could ‘think with’ massive chunks of digital machinery that are very close to error-free, don’t need to sleep, don’t age, don’t have agendas beyond their utility function, and run at 4-5 GHz instead of the roughly 1 kHz of biological neurons. It could learn in parallel from thousands or millions of instances of itself. And its algorithms can be structured so that it objectively “knows if it knows” and “knows if it doesn’t know” how to achieve the desired outcome (so it doesn’t act at all if it doesn’t know).
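The “know if you know” idea can be sketched as a confidence gate: the agent acts only when its calibrated confidence clears a threshold, and abstains otherwise. This is a minimal illustrative sketch; the function names and the 0.95 threshold are assumptions, not any real system’s API.

```python
# Hypothetical sketch of "know if you know": act only when calibrated
# confidence clears a threshold, otherwise abstain entirely.
# All names here are illustrative, not from any real library.

def act_if_confident(action, confidence, threshold=0.95):
    """Return the action only if confidence meets the threshold."""
    if confidence >= threshold:
        return action
    return None  # abstain: the agent "knows it doesn't know"

print(act_if_confident("pick_up_cup", 0.99))  # confident -> acts
print(act_if_confident("pick_up_cup", 0.60))  # uncertain -> abstains (None)
```

The key design choice is that abstention is the default: doing nothing is treated as safer than acting on a guess.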
Anyway, with all these advantages, your AI software would have to be almost infinitely stupid not to beat humans at any task where it is possible to score whether or not it has succeeded.
(‘Scoring’ generally requires a way to simulate the task, both the actor and the environment, and to give the machine many tries to find a way to get a good score. So this covers essentially any task on earth where a computer can accurately and more or less completely determine whether the task criterion was met. Any task that involves manipulating a physical object with reasonably modelable properties falls into this class.)
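The simulate-and-score loop described above can be sketched in a few lines: propose a candidate action, run it through a simulated environment, keep the best scorer. Everything here is a toy assumption (the “environment” is just a hidden target number), not a real reinforcement-learning setup.

```python
import random

# Toy sketch of the simulate-and-score loop: the "environment" is a hidden
# target value, a "candidate" is a proposed action, and the score measures
# how close the candidate got. Purely illustrative.

def simulate_and_score(candidate, target=7.3):
    """Simulated environment: score a candidate (higher is better)."""
    return -abs(candidate - target)

def many_tries(n_tries=10_000, seed=0):
    """Give the machine many tries and keep the best-scoring candidate."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_tries):
        candidate = rng.uniform(0.0, 10.0)     # propose an action
        score = simulate_and_score(candidate)  # run it in the simulator
        if score > best_score:
            best, best_score = candidate, score
    return best

print(round(many_tries(), 2))  # converges near the hidden target, 7.3
```

Even blind random search finds a good answer here, which is the point: once a task can be simulated and scored, brute trial volume substitutes for insight.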
The tasks that don’t fit are ones with humans as the direct target, or that appeal to qualities only humans can perceive. That is: art, politics, teaching, counseling, cleaning tasks where a robot can’t perceive the thing being cleaned (such as bad smells), landscaping design, architecture for appearances (though not structural, electrical, or plumbing work), and so on.