Eliezer Yudkowsky is evil. He trains rationalists and involves them in FAI and x-risk for some hidden egoistic goal other than saving the world and making people happy. Most people would not want him to reach that goal if they knew what it was. There is a grand masterplan. The money we're giving to CFAR and MIRI isn't going into AI research so much as into that masterplan. You should study rationality by means other than LW, OB, and everything nearby, or not study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?