When we say ‘rationality’, we mean instrumental rationality: getting what you want. Elsewhere, we also refer to epistemic rationality, which is believing true things. In neither case do we say anything about what you should want.
Dave might not be explaining his own position as clearly as one might wish, but I think the core of his objection is that Jane is not being epistemically rational when she decides to eat other sentient beings. This is because she is acting on a preference rooted in a belief which doesn’t adequately represent certain aspects of the world—specifically, the subjective points of view of sentient members of non-human animal species.
(BTW, I believe this subthread, including Eliezer’s own helpful comments below, provides some evidence that Eliezer’s policy of penalizing replies to comments below a certain karma threshold is misguided.
EDIT: David’s comment above had −6 karma when I originally posted this reply. His current score is no longer below the threshold.)
Yes, although our conception of epistemic and instrumental rationality is certainly likely to influence our ethics, I was making a point about epistemic and instrumental rationality. Thus imagine if we lived in an era where utopian technology delivers a version of ubiquitous naturalised telepathy, so to speak. Granted such knowledge, for an agent to act in accordance with a weaker rather than a stronger preference would be both epistemically and instrumentally irrational. Of course, we don’t (yet) live in an era of such radical transparency. But why should our currently incomplete knowledge—our ignorance—make it instrumentally rational to fail to take into consideration what one recognises, intellectually at least, as the stronger preference? In this instance, the desire not to have one’s throat slit is a very strong preference indeed.
[“Replies to downvoted comments are discouraged. Pay 5 Karma points to proceed anyway?” says this Reply button. How bizarre. Is this invitation to groupthink epistemically rational? Or is killing cows good karma?]