I’ve been uncomfortable for a while with statements like Eliezer’s remark that:
When heavier-than-air flight or atomic energy was a hundred years off, it looked fifty years off or impossible; when it was five years off, it still looked fifty years off or impossible.
This really is picking and choosing specific technological examples rather than looking at the overall pattern. In 1964, five years before the first moon landing, it looked a few years off but certainly not a hundred years off.
Perhaps the best online tool for calibration training is PredictionBook.com
I strongly agree with this. I’ve used it to make a variety of predictions, including tech predictions. One issue it does have is that there’s no easy categorization, so one can’t use it, for example, to see at a glance whether one’s tech predictions are more or less accurate than one’s predictions about politics or other subjects.
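As a minimal sketch of the kind of per-category comparison that seems to be missing: assuming one has manually tagged each resolved prediction with a category, the stated probability, and whether it came true, one could compute a per-category Brier score. The data below is made up for illustration; PredictionBook itself doesn’t export such tags.

```python
# Hypothetical, hand-tagged prediction records: (category, stated probability, outcome).
# Illustrative values only, not real PredictionBook data.
from collections import defaultdict

predictions = [
    ("tech",     0.80, True),
    ("tech",     0.60, False),
    ("politics", 0.70, True),
    ("politics", 0.90, True),
]

def brier_by_category(preds):
    """Mean Brier score per category (lower means better calibration)."""
    scores = defaultdict(list)
    for category, prob, outcome in preds:
        scores[category].append((prob - float(outcome)) ** 2)
    return {cat: sum(vals) / len(vals) for cat, vals in scores.items()}

print(brier_by_category(predictions))
# -> roughly {'tech': 0.2, 'politics': 0.05}
```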
Mathematicians seldom try and predict when major problems will be solved, because they recognise that insight is very hard to predict.
Noteworthy counterexample: Soon after the Feit-Thompson theorem, people started talking about classifying all finite simple groups, but that was because Gorenstein had a specific blueprint that, it was thought, might yield the full result. But even then, the predicted time frame was shorter.
In cases like the Riemann hypothesis we have a few ideas that might work, but none look that promising, and results one would expect to fall first, like the Lindelöf hypothesis, remain apparently unassailable. So one major sign of a problem being genuinely far off is that, even to our eyes, much simpler problems look far off. I’m not sure how to apply that to AI. Do modern practical successes like machine learning plausibly count as successes on related minor aspects? It will be a lot easier to tell after there’s some form of general AI and we have more of an idea about its structure. Similar issues apply to almost any future tech.
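For readers unfamiliar with the reference, the Lindelöf hypothesis is the bound

\[ \zeta\!\left(\tfrac{1}{2} + it\right) = O\!\left(|t|^{\varepsilon}\right) \quad \text{for every } \varepsilon > 0, \]

which is a known consequence of the Riemann hypothesis, so it is formally the easier target; its continued resistance is part of what makes RH itself look far off.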
This really is picking and choosing specific technological examples rather than looking at the overall pattern. In 1964, five years before the first moon landing, it looked a few years off but certainly not a hundred years off.
I don’t think Eliezer meant to say that breakthrough technologies always seem 50 years off or impossible until they are invented. Those who were paying attention to computer chess could predict that it would surpass the human level before the end of the millennium, and we’ve seen self-driving cars coming for a while now. Anyway, I’ve now added a clarifying note below the Eliezer quote.
I don’t think Eliezer meant to say that breakthrough technologies always seem 50 years off or impossible until they are invented.
I don’t think JoshuaZ meant to say Eliezer meant to say that. It seems more like he just meant that the list feels cherry-picked; that the examples given seem to be chosen for their suitability to the argument rather than because they form a compelling signal when compared against other relevant data points.