I didn’t say they were. I said that just because the speaker for a particular idea comes across as crazy doesn’t mean the idea itself is crazy. That applies whether all of Eliezer’s “crazy statements” are about AI, or whether none of them are.
Whoever knowingly chooses to save one life, when they could have saved two – to say nothing of a thousand lives, or a world – they have damned themselves as thoroughly as any murderer.
The most extreme presumptuousness about morality; insufferable moralism.
Funny, I actually agree with the top phrase. It’s written in an unfortunately preachy, minister-scaring-the-congregation-by-saying-they’ll-go-to-Hell style, which is guaranteed to make just about anyone get defensive and/or go “ick!” But if you accept the (very common) moral standard that if you can save a life, it’s better to do it than not to do it, then the logic is inevitable: if you have the choice of saving one life or two, by your own metric it’s morally preferable to save two. If you don’t accept the moral standard that it’s better to save one life than zero lives, then that phrase should be just as insufferable.
Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science right?
I decided to be charitable, and went and looked up the post that this was in: it’s here. As far as I can tell, Eliezer doesn’t say anything that could be interpreted as “science exists because people are stupid, and I’m not stupid, therefore I don’t need science”. He claims that scientific procedure compensates for people being unwilling to let go of their pet theories and change their minds, and although I have no idea whether this goal was in the minds of the people who came up with the scientific method, the method does seem to accomplish it.
Newton definitely wrote down his version of scientific method to explain why people shouldn’t take his law of gravity and just add, “because of Aristotelian causes,” or “because of Cartesian mechanisms.”