On the issue of AI timelines:
A quantitative analysis of the sort you seek is really not possible for the specifics of future technological development. If we knew exactly what obstacles stood in the way, we’d be all but there. Hence the reliance instead on antipredictions and disjunctions, which leave a lot of uncertainty but can still point strongly in one direction.
My own reasoning behind an “AI in the next few decades” position is that, even if every other approach people have thought of and will think of bogs down, there’s always the ability to simulate a human brain, and the only obstacles there are scanning technology and computing power. In those domains, it’s rather less controversial to predict further advances (well within the theoretical limits).
Any form of cognitive enhancement (even just uploaded brains running faster than embodied brains, not to mention increasing memory or cognitive abilities) makes AI development easier and easier, and could enter a runaway state on its own.
Secondly, please don’t cite Tim Tyler as a source if you’re going to hold SIAI responsible for the argument. He’s a technophile who counts himself a fellow-traveler, but he definitely doesn’t speak for them on such issues.
Surely the poster wasn’t doing that!
I was not citing Tim Tyler as a source for SIAI’s views; I was addressing his argument as one of many in favor of a short-term focus on AI.
Is there something you would suggest I do to make this clearer in the top-level post?