Economic history suggests big changes are plausible.
Sure, but it is hard to predict what changes are going to happen and when. In particular, major economic changes are typically precipitated by technological breakthroughs. It doesn’t seem that we can predict these breakthroughs by looking at the economy, since the causal relationship mostly runs the other way.
AI progress is ongoing.
Ok.
AI progress is hard to predict, but AI experts tend to expect human-level AI in mid-century.
But AI experts have a notoriously poor track record at predicting human-level AI.
Several plausible paths lead to superintelligence: brain emulations, AI, human cognitive enhancement, brain-computer interfaces, and organizations.
Organizations probably can’t become much more “superintelligent” than they already are. Human cognitive enhancement, brain-computer interfaces, etc. also have limits.
Most of these probably lead to machine superintelligence ultimately.
Only digital intelligences (brain emulations and AIs) seem to have a realistic chance of becoming significantly more intelligent than anything that exists now, and even this is dubious.
That there are several paths suggests we are likely to get there.
There aren’t really many paths, and they are not independent.
Organizations can become much more “superintelligent” than they are now. A team of humans plus ever-improving weak AI has no obvious upper limit on intelligence. Such a hybrid superintelligent organization could be the way to keep AI development under control.
In which case most of the “superintelligence” would come from the AI, not from the people.
The synergistic union of human+AI (master+servant) is more intelligent than AI alone, which will have large deficits in several intelligence domains; human+AI has no intelligence domain at a sub-human level. I agree that the superintelligence originates primarily from the AI’s capabilities, but without human initiative, creativity, and skill in wielding powerful tools, those superintelligent capabilities would not come into action.
Do you think AI experts deserve their notoriety at predicting? The several public predictions that I know of prior to 1980 were indeed early (i.e. we have passed the time they predicted) but [Michie’s survey] covers about ten times as many people and suggests that in the 70s, most CS researchers thought human-level AI would not arrive by 2014.
I thought that the main result from Armstrong and Sotala was that most AI experts who made a public prediction predicted human-level AI within 15 to 20 years of their own time, regardless of when they made the prediction.
Is this new data? Can you give a reference for how it was obtained?
That was one main result, yes. It looks like Armstrong and Sotala counted the Michie survey as one ‘prediction’ (see their dataset here). They have only a small number of other early predictions, so it is easy for that to make a big difference.
The image I linked is the dataset they used, with some modifications made by Paul Christiano and me (explained at more length here, along with the new dataset for download). For example, we removed duplicates and some entries which seemed to have been sampled in a biased fashion (such that only early predictions would be recorded). We also took out the Michie set altogether; our graph is now of public statements, not survey data.
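For concreteness, here is a minimal sketch of the kind of check described above: load a table of predictions, drop duplicate statements, and look at how far ahead of its own date each prediction placed human-level AI. The file name and column names are hypothetical placeholders, not the actual Armstrong and Sotala schema.

```python
# Minimal sketch (assumed schema, not the real Armstrong/Sotala dataset):
# each row is one public prediction, with the year it was made and the
# year human-level AI was predicted to arrive.
import pandas as pd

preds = pd.read_csv("ai_predictions.csv")  # hypothetical file name

# Remove duplicate statements by the same predictor for the same target year.
preds = preds.drop_duplicates(subset=["predictor", "year_predicted"])

# Horizon: how many years ahead of the prediction date human-level AI was expected.
preds["horizon"] = preds["year_predicted"] - preds["year_made"]

# Overall distribution of horizons, and the median horizon by decade in which the
# prediction was made, to see whether predictions cluster "15 to 20 years out"
# regardless of when they were made.
print(preds["horizon"].describe())
print(preds.groupby(preds["year_made"] // 10 * 10)["horizon"].median())
```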