Although one should presumably be glad that he is giving you the information needed to appropriately weigh his claims. I am also reminded of the AGI timelines survey at a past AGI conference, which peaked sharply in the next few decades (i.e., within the careers of the AI researchers being surveyed) and then fell rapidly. Other conversations with the folk in question make it look like that survey in part reflects people saying “obviously, my approach has a good chance of success, but if I can’t do it then no one can.” Or, alternatively:
It takes some decades to develop a technique to fruition.
I assume that only techniques I am currently aware of will ever exist.
Therefore, in a few decades, when current techniques have been developed and shown to succeed or fail, either we will have AI or we will not get it for a very long time, if ever.
I suspect that these factors lead folk specifically working on AGI to overweight near-term AGI probability and underweight longer-term AGI prospects.
In my experience there’s a positive correlation: the more someone looks into the trends in the AGI literature, the sooner they think AGI will arrive, even in cases where they hope it’s a long way off. Naively, I don’t get the impression that the bias you pointed out is strongly affecting e.g. Legg or Schmidhuber. I got the impression that your distribution has a median later than that of most AGI folk, including those at SIAI (as far as I can tell; I may be wrong about the views of some SIAI people). Are you very familiar with the AGI literature, or do you believe your naive outside view beats their inside view plus outside-view corrections (insofar as anyone knows how to make such corrections)? You’ve put far more thought into Singularity scenarios than almost anyone else. To what extent do you think folk like me should update on your beliefs?