we know now that AI researchers in the '80s and earlier were TREMENDOUSLY overoptimistic
In hindsight they were overoptimistic, but given the knowledge to which they had access at the time, it’s harder to make the same arguments. How would you argue that a researcher at that time should have known how much the computational power constraints of that day mattered?
But I’d argue that their optimism stemmed from irrational assumptions. I’m not even saying that, if I were transported back in time, I wouldn’t fall prey to the same irrational assumptions; but I would say that they had naive views of problems like visual object recognition or language comprehension that were completely unmotivated.
A comparable error today would be to assume that Strong AI is right around the corner as soon as we crack some current set of well-defined research problems, as if there could not be any further problems that are not yet understood.
A comparable error today would be to assume that Strong AI is right around the corner as soon as we crack some current set of well-defined research problems
I don’t see at all how the step from non-self-modifying AI to self-modifying AI is in the same reference class as solving most well-defined current research problems.
I think we’re arguing over whether I’m speaking from hindsight bias or whether the researchers in the past were irrationally overoptimistic (and whether EY’s assessment of how optimistic they should have been without hindsight is overoptimistic).
Let’s admit both are possible.
What could I show you that would convince you of the latter?
What could I show you that would convince you of the latter?
A valid heuristic that comes to the conclusion that you want to convince me of. In this case, your claim that moving from non-self-modifying AI to self-modifying AI is no qualitative leap, in the same way that solving most current well-defined AI problems is no qualitative leap, suggests that you aren’t reasoning clearly.
If you get the easy things wrong, then the harder things are also more likely to be wrong.
Furthermore, there is a strong prior that you are wrong in your probability estimates if you aren’t calibrated. It has been shown that naive attempts to correct for hindsight bias just don’t work.
Until you have at least trained your calibration a bit, you aren’t in a good position to judge whether other people are off.