Thanks for the comment, and glad it helped you. :)
outside vs. inside view—I’ve thought about this before but hadn’t read this clear a description of the differences and tradeoffs before (still catching up on Eliezer’s old writings)
My inner Daniel Kokotajlo is very emphatically pointing to that post about all the misuses of the term “outside view”. Actually, Daniel commented on my draft that he definitely didn’t think Hanson was using the real outside view, AKA reference class forecasting, in the FOOM debate, and that, as Yudkowsky points out, reference class forecasting just doesn’t seem to work for AGI prediction and alignment.
I just hope all his smack talking doesn’t turn off/away talented people coming to lend a hand on alignment. I expect a lot of people on this (AF) forum found it like me after reading all Open Phil and 80,000 Hours’ convincing writing about the urgency of solving the AI alignment problem. It seems silly to have those orgs working hard to recruit people to help out, only to have them come over here and find one of the leading thinkers in the community going on frequent tirades about how much EAs suck, even though he doesn’t know most of us. Not to mention folks like Paul and Richard who have been taking his heat directly in these marathon discussions!
Yeah, I definitely think there are and will be bad consequences. My point is not that I think this is a good idea, just that I understand better where Yudkowsky is coming from, and can empathize more with his frustration.
I feel the most dangerous aspect of the smack talking is that it makes people not want to listen to him, and just see him as a smack talker with nothing to add. That was my reaction when reading the first discussions, and I had to explicitly notice that my brain was going from “This guy is annoying me so much” to “He’s wrong”, which is basically status-fueled “deduction”. So I went looking for more. But I completely understand people, especially those doing a lot of work in alignment, reacting with “I’m not going to stop my valuable work to try to understand someone who’s just calling me a fool and is unable to voice their arguments in a way I understand.”