Ricraz writes:

I’m broadly sympathetic to the empirical claim that we’ll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single superhuman AGI (one unified system that can do nearly all cognitive tasks as well as or better than any human).
I’d be interested in operationalising this further, and hearing takes on how many years “significantly before” entails.
He also adds:
One plausible mechanism is that deep learning continues to succeed on tasks where there’s lots of training data, but doesn’t learn how to reason in general ways—e.g. it could learn from court documents how to imitate lawyers well enough to replace them in most cases, without being able to understand law in the way humans do. Self-driving cars are another pertinent example. If that pattern repeats across most human professions, we might see massive societal shifts well before AI becomes dangerous in the adversarial way that’s usually discussed in the context of AI safety.
The operationalisation which feels most natural to me is something like:
Make a list of cognitively difficult jobs (lawyer, doctor, speechwriter, CEO, engineer, scientist, accountant, trader, consultant, venture capitalist, etc.)
A job is automatable when there exists a publicly accessible AI service which allows an equally skilled person to do the job just as well in less than 25% of the time it used to take a specialist, OR which allows someone with little skill or training to do the job in about the same time it used to take a specialist.
I claim that over 75% of the jobs on this list will be automatable within 75% of the time between now and the development of a single superhuman AGI.
(Note that there are three free parameters in this definition: the 25% time threshold, the 75% share of jobs, and the 75% share of elapsed time. I’ve set them to arbitrary numbers that seem intuitively reasonable.)
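To make these parameters concrete, here is a minimal sketch of how one might score the claim. Every specific in it (the AGI arrival date, the job list, the per-job automation dates) is invented purely to illustrate how the 25%/75%/75% thresholds interact; nothing below is an actual estimate from this discussion.

```python
from datetime import date

# Toy scoring of the claim. All dates below are hypothetical, chosen only
# to illustrate the mechanics of the definition.
NOW = date(2025, 1, 1)
AGI_DATE = date(2045, 1, 1)   # assumed arrival of a single superhuman AGI
TIME_FRACTION = 0.75          # "within 75% of the time until AGI"
JOB_FRACTION = 0.75           # "over 75% of the jobs on this list"

# Date each job first meets the automatability criterion, i.e. a publicly
# accessible AI service lets an equally skilled person do it in under 25%
# of a specialist's time, or lets an unskilled person match a specialist.
automatable_on = {
    "lawyer": date(2031, 1, 1),
    "doctor": date(2033, 1, 1),
    "speechwriter": date(2029, 1, 1),
    "engineer": date(2036, 1, 1),
    "scientist": date(2043, 1, 1),  # misses the deadline in this toy world
}

# Deadline implied by the claim: 75% of the way from now to AGI.
deadline = NOW + TIME_FRACTION * (AGI_DATE - NOW)

share = sum(d <= deadline for d in automatable_on.values()) / len(automatable_on)
print(f"deadline: {deadline}, share automatable by then: {share:.0%}")
print("claim holds in this toy world:", share > JOB_FRACTION)
```

In this toy world the deadline falls at the end of 2039, four of the five jobs beat it, and the claim holds (80% > 75%).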
Why are you measuring it as a proportion of time-until-agent-AGI rather than in years? If it takes 2 years to get from comprehensive services to a single agent, and most jobs are automatable within 1.5 years, that seems a lot less striking and important than the pre-operationalisation claim.
The 75% figure is measured from now until single-agent AGI, not from the arrival of comprehensive services. I measure it proportionally because measuring in years would say more about my timeline estimates than about CAIS.
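To see what the proportional framing buys, here is a small sketch (all timelines invented for illustration): the 75% fraction is fixed by the claim, while the absolute window it implies scales with whatever AGI timeline one assumes.

```python
# Invented timelines, purely illustrative. The 75% fraction is fixed by the
# claim; the absolute automation window it implies scales with the AGI estimate.
for years_to_agi in (2, 20, 40):
    window = 0.75 * years_to_agi
    print(f"single-agent AGI in {years_to_agi:>2} years -> "
          f"jobs must be automatable within {window:4.1f} years")
```

Under a 2-year timeline the window is 1.5 years (the questioner's scenario); under a 40-year timeline it is 30 years. The fraction stays the same either way, which is the point of measuring proportionally.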