Background and questions
Since Eric Drexler publicly released his “Comprehensive AI Services” model (CAIS), there has been a series of analyses on LW from rohinmshah, ricraz, PeterMcCluskey, and others.
Much of this discussion focuses on the implications of this model for safety strategy and resource allocation. In this question I want to focus on the empirical part of the model.
What are the boldest predictions the CAIS model makes about what the world will look like in <=10 years?
“Boldest” might be interpreted as those predictions to which CAIS assigns a decent chance, but which have the lowest probability under other “worldviews” such as the Bostrom/Yudkowsky paradigm.
A prediction which all these worldviews agree on, but which is nonetheless quite bold, is less interesting for present purposes (possibly something like the claim that we will see faster progress than mainstream academia expects).
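One way to make this precise (a rough formalisation of my own, not anything from Drexler’s report): treat a prediction $E$ as bold to the extent that

$$\text{bold}(E) = \frac{P(E \mid \text{CAIS})}{P(E \mid \text{rival worldview})}$$

is large, subject to $P(E \mid \text{CAIS})$ not being tiny. By Bayes’ rule, observing $E$ multiplies an observer’s odds between the worldviews by exactly this likelihood ratio, which is why predictions the worldviews agree on carry little information here.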
Some other related questions:
If you disagree with Drexler, but expect there to be empirical evidence within the next 1-10 years that would change your mind, what is it?
If you expect there to be events in that timeframe causing you to go “I told you so, the world sure doesn’t look like CAIS”, what are they?
Clarifications and suggestions
I should clarify that answers can be about things that would change your mind about whether CAIS is safer than other approaches (see e.g. the Wei_Dai comment linked below).
But I suggest avoiding discussion of cruxes which are more theoretical than empirical (e.g. how decomposable high-level tasks are) unless you have a neat operationalisation for making them empirical (e.g. whether there will be evidence of large economies of scope in the most profitable automation services).
Also, it might be really hard to get this down to a single prediction, so it might be useful to pose a cluster of predictions and different operationalisations, and/or to use conditional predictions.
One clear difference between Drexler’s worldview and MIRI’s is that Drexler expects progress to continue along the path that recent ML research has outlined, whereas MIRI sees more need for fundamental insights.
So I’ll guess that Drexler would predict maybe a 15% chance that AI research will shift away from deep learning and reinforcement learning within a decade, whereas MIRI might say something more like 25%.
I’ll guess that MIRI would also predict a higher chance of an AI winter than Drexler would, at least for some definition of “winter” that focuses more on diminishing IQ-like returns to investment than on overall spending.
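Plugging these guesses into the likelihood-ratio framing above suggests that, if the numbers are anywhere near right, this particular prediction is not very bold. A minimal sketch (the credences are the guesses above, not anyone’s stated numbers):

```python
# Bayes factors implied by the guessed credences: Drexler 15%, MIRI 25%
# for "AI research shifts away from deep learning and RL within a decade".
p_shift_drexler = 0.15
p_shift_miri = 0.25

# Odds shift towards MIRI if the shift happens...
bf_if_shift = p_shift_miri / p_shift_drexler                 # ~1.67
# ...and towards Drexler if it doesn't.
bf_if_no_shift = (1 - p_shift_drexler) / (1 - p_shift_miri)  # ~1.13

print(f"shift happens: {bf_if_shift:.2f}x towards MIRI")
print(f"no shift:      {bf_if_no_shift:.2f}x towards Drexler")
# Either outcome moves the odds by less than a factor of 2, so by the
# criterion above this prediction only weakly separates the worldviews.
```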
Wei_Dai writes:
Ricraz writes:
I’d be interested in operationalising this further, and hearing takes on how many years “significantly before” entails.
He also adds:
The operationalisation which feels most natural to me is something like:
Make a list of cognitively difficult jobs (lawyer, doctor, speechwriter, CEO, engineer, scientist, accountant, trader, consultant, venture capitalist, etc...)
A job is automatable when there exists a publicly accessible AI service which allows an equally skilled person to do just as well in less than 25% of the time that it used to take a specialist, OR which allows someone with little skill or training to do the job in about the same time that it used to take a specialist.
I claim that over 75% of the jobs on this list will be automatable within 75% of the time until a single superhuman AGI is developed.
(Note that there are three free parameters in this definition, which I’ve set to arbitrary numbers that seem intuitively reasonable).
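For concreteness, here is a minimal sketch of how this definition might be encoded, with the three free parameters as named constants. The structure and names are my own illustrative choices, and I read “about the same time” as “no slower”; none of this is part of the original proposal.

```python
from dataclasses import dataclass

# The three free parameters from the definition above (arbitrary but
# intuitively reasonable values, per the note).
EXPERT_SPEEDUP = 0.25  # expert with the service finishes in <25% of the old time
JOB_SHARE      = 0.75  # share of listed jobs that must be automatable
TIME_SHARE     = 0.75  # share of the time from now until single-agent AGI

@dataclass
class Job:
    name: str
    expert_time_ratio: float  # (equally skilled person using the service) / (specialist without)
    novice_time_ratio: float  # (person with little training using the service) / (specialist without)

def automatable(job: Job) -> bool:
    # A publicly accessible service lets an expert do the job in <25% of
    # the old time, OR lets a novice match a specialist's speed.
    return job.expert_time_ratio < EXPERT_SPEEDUP or job.novice_time_ratio <= 1.0

def prediction_holds(jobs: list[Job]) -> bool:
    # Over 75% of the jobs on the list are automatable (by the deadline
    # implied by TIME_SHARE, which is checked separately).
    return sum(automatable(j) for j in jobs) / len(jobs) > JOB_SHARE
```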
Why are you measuring it in proportion to time-until-agent-AGI rather than in years? If it takes 2 years from comprehensive services to agent AGI, and most jobs are automatable within 1.5 years, that seems a lot less striking and important than the claim did before operationalisation.
The 75% figure is from now until single-agent AGI. I measure it proportionally because otherwise the prediction says more about timeline estimates than about CAIS.
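A quick worked example of the difference, with purely illustrative timelines:

```python
TIME_SHARE = 0.75  # the proportional deadline from the definition above

# The same proportional claim cashes out very differently in absolute
# years depending on when single-agent AGI arrives (illustrative values).
for years_to_agent_agi in (4, 20, 40):
    deadline = TIME_SHARE * years_to_agent_agi
    print(f"AGI in {years_to_agent_agi:2d}y -> jobs automatable within {deadline:4.1f}y")

# 4y -> 3.0y, 20y -> 15.0y, 40y -> 30.0y. A fixed deadline in years would
# mostly test one's timeline estimate; the proportional version isolates
# the CAIS-specific claim that comprehensive services arrive well before
# agent-like AGI.
```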
If research into general-purpose systems stops producing impressive progress, and the application of ML to specialised domains becomes more profitable, we’ll soon see much more investment in AI labs that are explicitly application-focused rather than basic-research-focused.