The interview that’s linked with the name is excellent, though. In an AI context (“as far as I [the AI guy] am concerned”), the quote makes more sense.
I’d upvote a link to the article if it were posted in an open thread. I downvote it (and all equally irrational ‘rationalist quotes’) when they are presented as such here.
Yeah, I sometimes struggle with that: taken at face value, the quote is of course trivially wrong. However, it can be steelmanned in a few interesting ways. Then again, so can a great many random quotes. If, say, EY posted that quote, people might upvote after thinking of a steelmanned version. Whereas with someone else, fewer readers will bother, and they'll downvote since, to a first approximation, the statement is wrong. What do, I wonder?
(Example: “If you meet the Buddha on the road, kill him!”—Well downvoted, because killing is wrong! Or upvoted, because e.g. even “you may hold no sacred beliefs” isn’t sacred? Let’s find out.)