outside vs. inside view—I’ve thought about this before but hadn’t read this clear a description of the differences and tradeoffs before (still catching up on Eliezer’s old writings)
“deep knowledge is far better at saying what won’t work than at precisely predicting the correct hypothesis.”—very useful takeaway
You might not like his tone in the recent discussions, but if someone has been saying the same thing for 13 years, nobody seems to get it, and their model predicts that this will lead to the end of the world, maybe they can get some slack for talking smack.
Good point and we should. Eliezer is a valuable source of ideas and experience around alignment, and it seems like he’s contributed immensely to this whole enterprise.
I just hope all his smack talking doesn’t turn off/away talented people coming to lend a hand on alignment. I expect a lot of people on this (AF) forum found it like me after reading all Open Phil and 80,000 Hours’ convincing writing about the urgency of solving the AI alignment problem. It seems silly to have those orgs working hard to recruit people to help out, only to have them come over here and find one of the leading thinkers in the community going on frequent tirades about how much EAs suck, even though he doesn’t know most of us. Not to mention folks like Paul and Richard who have been taking his heat directly in these marathon discussions!
Thanks for the comment, and glad it helped you. :)
outside vs. inside view—I’ve thought about this before but hadn’t read this clear a description of the differences and tradeoffs before (still catching up on Eliezer’s old writings)
My inner Daniel Kokotajlo is very emphatically pointing to that post about all the misuses of the term “outside view”. Actually, Daniel commented on my draft that he definitely didn’t think Hanson was using the real outside view (i.e. reference class forecasting) in the FOOM debate, and that, as Yudkowsky points out, reference class forecasting just doesn’t seem to work for AGI prediction and alignment.
I just hope all his smack talking doesn’t turn off/away talented people coming to lend a hand on alignment. I expect a lot of people on this (AF) forum found it like me after reading all Open Phil and 80,000 Hours’ convincing writing about the urgency of solving the AI alignment problem. It seems silly to have those orgs working hard to recruit people to help out, only to have them come over here and find one of the leading thinkers in the community going on frequent tirades about how much EAs suck, even though he doesn’t know most of us. Not to mention folks like Paul and Richard who have been taking his heat directly in these marathon discussions!
Yeah, I definitely think there are and will be bad consequences. My point is not that I think this is a good idea, just that I understand better where Yudkowsky is coming from, and can empathize more with his frustration.
I feel the most dangerous aspect of the smack talking is that it makes people not want to listen to him, and instead see him as a smack talker with nothing to add. That was my reaction when reading the first discussions, and I had to explicitly notice that my brain was going from “This guy is annoying me so much” to “He’s wrong”, which is basically status-fueled “deduction”. So I went looking for more. But I completely understand people, especially those doing a lot of work in alignment, reacting with “I’m not going to stop my valuable work to try to understand someone who’s just calling me a fool and is unable to voice their arguments in a way I understand.”
Great investigation/clarification of this recurring idea from the ongoing Late 2021 MIRI Conversations.