As a semi-newbie (I've read the major arguments and thought about them, but haven't followed the detailed maths) and someone who has followed the fields of AI and then alignment for 15-20 years, it isn't just EY that I now feel is clearly incorrect. For example, from reading the likes of Superintelligence it sure seemed that a major reason for a paperclip optimizer would be that the AI would do what you say but not know what you mean. That seems pretty much impossible now: GPT-4 understands what I mean better than many people, yet has no ability to take over the world. More generally, I feel the alignment literature would not have benefited from more time before GPT-4; it would just have become more convinced of incorrect conclusions.
It is also plausible that there are several other important conclusions from the literature that are not correct, and we don't yet know which ones. I used to believe that a fast take-off was inevitable; now, after reading Jacob Cannell and others, I think it is very unlikely. EY was very good at raising awareness, but that does not mean he should somehow represent the AI safety field because of it.
On a personal note, I distinctly remember EY being negative about deep learning, to my surprise at the time (that is, before AlphaGo etc.), because for the entire time I studied the field I felt it was inevitable that deep learning/neuromorphic systems would win. (Unfortunately I didn't keep the reference to his comment, so I can't prove that to anyone else.)
I have deployed GOFAI signal-processing systems I wrote from scratch, studied psychology, etc., which led me to the conclusion that deep learning would be the way to go. GOFAI is hopelessly brittle; neural networks are not.
I also strongly disagree with EY about the ethics of not valuing the feelings of non-reflective mammals and the like.
The problem was never about "knowing what you mean", but about "caring about what you mean".
Well, that certainly wasn't the impression I got; some texts explicitly gave that impression.
"The genie knows, but doesn't care", for example.
OK, do you disagree with Nora's assessment of how Superintelligence has aged?
https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire#fntcbltyk9tdq
The genie you have there seems to require a very fast takeoff in order to be real and to be overwhelmingly powerful compared to other systems.
I honestly think that many such opinions come from over-updating/over-generalizing on ChatGPT.
Yeah, but that argument was wrong, too.
Making one system change another is easy; making one system change another into an aligned superintelligence is hard.
What’s that relevant to?