In the picture you just drew, the ideal being is derived from a series of better beings, so it is (trivially) easier to imagine a better being than to imagine an ideal being.
I see it differently: the ideal being maximizes all good qualities, whereas imperfect beings have differing levels of the various good qualities. Thus to compare a non-ideal being to an ideal being, we only need to recognize how the ideal being does better than the non-ideal being in each good quality. But to compare two non-ideal beings, we need to evaluate trade-offs between their various attributes (unless one is strictly greater than the other).
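To make that comparison point a bit more concrete (my own framing, not part of the exchange above), think of each being as a vector of quality levels:

    x = (x_1, ..., x_n), where x_i is the level of good quality i.
    The ideal being I has I_i >= x_i for every quality i and every being x, so comparing I with any x is settled coordinate by coordinate.
    Two non-ideal beings x and y are directly comparable only when one dominates the other (x_i >= y_i for every i); otherwise ranking them requires a trade-off rule, e.g. a weighted score such as sum_i w_i * x_i, and choosing the weights w_i is where the hard work lies.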
Thinking about it more, I am not happy with either of the above models. One question that arises is: does the same reasoning extend to other cases as well? That is, are we better off thinking about incremental improvements than about the ideal society? Are we better off thinking about incremental improvements than about the ideal chess algorithm?
I think in some cases maybe we are, but in some cases we aren't; ideals are useful sometimes. I'd go further and say that some aspects of many ideals must be arrived at by iterating, but other aspects can be concluded more directly. An uninteresting conclusion, but one that supports my overall point: I wasn't claiming that I knew everything about the ideal FAI, just that I had justified high confidence in some things.