But of course, not all the rationalists I create will be interested in my own project—and that’s fine. You can’t capture all the value you create, and trying can have poor side effects.
I expect I could find at least a dozen quotes where he contradicts himself there, if I cared to spend the time looking for them. Here are just a few:
(Please read up on the context.)
I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.
...
I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays...
...
…if Omega tells me that I’ve actually managed to do worse than nothing on Friendly AI, that of course has to change my opinion of how good I am at rationality or teaching others rationality,...
Given the evidence, I find it hard to believe that he doesn't care whether LessWrong members believe that AI risk is the most important issue today. I also don't think he would call someone a rationalist who has read everything he wrote and decided not to care about AI risk.
You’ve got selective quotation down to an art form. I’m a bit jealous.