Fair enough. If you don’t have the time/desire/ability to look at the alignment problem arguments in detail, going by “so far, all doomsday predictions turned out false” is a good, cheap, first-glance heuristic. Of course, if you eventually manage to get into the specifics of AGI alignment, you should discard that heuristic and instead let the (more direct) evidence guide your judgement.
Speaking of predictions: there was an AI winter a few decades ago, when most predictions of rapid AI progress turned out to be completely wrong. Recently, though, the opposite trend dominates: it's the predictions that downplay the progress of AI capabilities that keep turning out wrong. What does your model say you should conclude from that?