I think these problems are much more similar than you do. Perhaps not purely equivalent, but the hard parts of each are very much the same.
Note that there’s a lot of fuzz in the definition of AGI, which could make it easier than, harder than, or orthogonal to self-driving taxis.
> a computer program that functions as a drop-in replacement for a human remote worker, except that it’s better than the best humans at every important task (that can be done via remote workers).
“a human remote worker”: one worker, some workers, some parts of some jobs, only script-based call-center work with liberal escalation to humans available, or some other criterion? One could argue that a whole lot of jobs from the last millennium have been automated away, and that doesn’t really qualify as AGI. It might be more useful to compare on the same dimensions as self-driving cars: amount of output with minimal human operational intervention, for a given type of job.
But that problem aside, the thing that’s going to be difficult in both branches is the long tail. Human cognition is massive overkill for 99% of the things that humans actually do. But the small number of unexpected situations or highly variant demands is REALLY hard to handle. The reason humans are hired for office/knowledge/remote-able jobs is that there are undocumented, non-obvious, and changing requirements for the tools in use and the other people involved in the enterprise. The old joke “why would I pay you just to push buttons?”, answered with “you don’t, you pay me to push only the right buttons at the right times,” points to this very difficult-to-define capability.
This difficulty of definition applies to training for, documenting, and performing the tasks, of course. But it ALSO applies to even identifying how to measure how well an agent (human or AGI) handles them. Or even to noticing that this is the crux of why a human is there at all.
Further, both branches face the extra hurdle that human-level performance is insufficient for success. Self-driving cars are ALREADY better than (guess, no cite) the 75th-percentile human driver in 90% of conditions. But that’s not good enough to make the switch; they need to be near-perfect, including the decisions about when to break rules in order to get something done. Chatbots are ALREADY better than the median level-one customer support agent, and that’s not good enough either: the ability to decide when to go beyond the script, or to escalate, needs to be way BETTER than human before the switch gets made.