You should also take into account that Eliezer seems to have been right, as an “amateur” AI researcher, about AI alignment being a big deal.
The alignment problem is arguably another example, like my response above re quantum physics, of a field spilling over into philosophy, such that even a strong amateur philosopher can point out things the AI professionals hadn’t thought through. That is, it shows that AI alignment is an interdisciplinary topic which (I assume) went beyond existing mainstream AI research.
Huh? Strong evidence for that would be us all being dead. Or did you just mean that some people in the field agree with him?
I want to insist that “it’s unreasonable to strongly update about technological risks until we’re all dead” is not a great heuristic for evaluating global catastrophic risks (GCRs).
The latter has come to be true: many people in the field now agree with him, in no small part as a result of his writing. This implies that there was indeed something academics were missing about alignment.
Only a minority agree with him. Any number of (contradictory!) ideas will “seem to be right” if the criterion is only that some people agree with them.
A sizable shift has occurred because of him, which is different from your interpretation of my position. If you’ve convinced Stuart Russell, who in turn is convincing Turing Award winners like Yoshua Bengio and Judea Pearl, then there was something that hadn’t been considered.