I think a useful meaning of “incomparable” is “you should think a very long time before deciding between these”
Indeed, sometimes whether or not two options are incomparable depends on how much computational power your brain is willing to spend on calculating and comparing the differences. Things that are incomparable might become comparable if you think about them more. However, when one is faced with the need to decide between two options, one has to use heuristics. For example, in his book "Predictably Irrational", Dan Ariely writes:
But there's one aspect of relativity that consistently trips us up. It's this: we not only tend to compare things with one another but also tend to focus on comparing things that are easily comparable—and avoid comparing things that cannot be compared easily.

That may be a confusing thought, so let me give you an example. Suppose you're shopping for a house in a new town. Your real estate agent guides you to three houses, all of which interest you. One of them is a contemporary, and two are colonials. All three cost about the same; they are all equally desirable; and the only difference is that one of the colonials (the "decoy") needs a new roof and the owner has knocked a few thousand dollars off the price to cover the additional expense. So which one will you choose?

The chances are good that you will not choose the contemporary and you will not choose the colonial that needs the new roof, but you will choose the other colonial. Why? Here's the rationale (which is actually quite irrational). We like to make decisions based on comparisons. In the case of the three houses, we don't know much about the contemporary (we don't have another house to compare it with), so that house goes on the sidelines. But we do know that one of the colonials is better than the other one. That is, the colonial with the good roof is better than the one with the bad roof. Therefore, we will reason that it is better overall and go for the colonial with the good roof, spurning the contemporary and the colonial that needs a new roof.

[...]

Here's another example of the decoy effect. Suppose you are planning a honeymoon in Europe. You've already decided to go to one of the major romantic cities and have narrowed your choices to Rome and Paris, your two favorites. The travel agent presents you with the vacation packages for each city, which includes airfare, hotel accommodations, sightseeing tours, and a free breakfast every morning. Which would you select?

For most people, the decision between a week in Rome and a week in Paris is not effortless. Rome has the Coliseum; Paris, the Louvre. Both have a romantic ambience, fabulous food, and fashionable shopping. It's not an easy call. But suppose you were offered a third option: Rome without the free breakfast, called -Rome or the decoy.

If you were to consider these three options (Paris, Rome, -Rome), you would immediately recognize that whereas Rome with the free breakfast is about as appealing as Paris with the free breakfast, the inferior option, which is Rome without the free breakfast, is a step down. The comparison between the clearly inferior option (-Rome) makes Rome with the free breakfast seem even better. In fact, -Rome makes Rome with the free breakfast look so good that you judge it to be even better than the difficult-to-compare option, Paris with the free breakfast.
So it seems that one possible heuristic is to pit your options against yet more alternatives and declare the winner to be the option that wins more (and loses fewer) of these matches. As the example shows, the result of this particular heuristic depends on which alternatives the initial options are compared against. Therefore it is probably not good enough to reveal which option is "truly better" unless, perhaps, the set of alternatives is somehow "balanced" (in some sense that I am not sure how to define exactly).
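Here is a toy sketch of that heuristic, just to make it concrete; the option names and the pairwise-judgement table are my own invented assumptions, not anything from Ariely. Each option is scored by the number of matches it wins minus the number it loses against a pool of alternatives, and the highest scorer is "declared the winner". The point is the one above: the verdict flips depending on which decoy happens to sit in the pool.

```python
# Toy sketch of the "wins minus losses against extra alternatives" heuristic.
# Option names and the judgement table are hypothetical, for illustration only.

# Pairwise judgements the decision-maker can make quickly, as (winner, loser).
# Pairs that are absent are treated as "can't tell" (incomparable).
JUDGEMENTS = {
    ("Rome", "-Rome"),    # Rome with breakfast clearly beats Rome without it
    ("Paris", "-Paris"),  # likewise for a Paris-flavoured decoy
}

def compare(a, b):
    """+1 if a is judged better than b, -1 if worse, 0 if the pair is incomparable."""
    if (a, b) in JUDGEMENTS:
        return 1
    if (b, a) in JUDGEMENTS:
        return -1
    return 0

def declared_winner(options, alternatives):
    """Score each option by matches won minus matches lost against the alternatives."""
    def score(option):
        return sum(compare(option, alt) for alt in alternatives if alt != option)
    return max(options, key=score)

# The verdict depends entirely on which decoy is thrown into the comparison pool.
print(declared_winner(["Rome", "Paris"], ["Rome", "Paris", "-Rome"]))   # -> Rome
print(declared_winner(["Rome", "Paris"], ["Rome", "Paris", "-Paris"]))  # -> Paris
```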
It seems to me that in many cases, by employing more and more (and better) heuristics, one can (perhaps after quite a lot of time spent deliberating) approach finding out which option is "truly better". However, the edge case is also interesting. The decision is not made instantly; it might take a lot of time. What if, over a given period, your preferences are less stable than what your computational power lets you resolve during that period? Can two options be said to be equal if your own brain does not have enough computational power to consistently distinguish between them, seemingly even in principle, even though a more powerful brain could make such a decision (given the same level of preference instability)? What about creatures that have very little computational power? Furthermore, aren't preferences themselves usually defined in terms of decision making? At the moment I am a bit confused about this.
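To make that edge case slightly more concrete, here is a small numerical sketch under invented assumptions: option B is "truly" marginally better than option A, but each individual evaluation is perturbed by preference noise much larger than the gap. A decision procedure that can afford only a few noisy evaluations flips between the options from one decision to the next, while one with a much larger evaluation budget picks B almost every time.

```python
import random

# Hypothetical numbers: B is slightly better than A, but the moment-to-moment
# noise in any single evaluation is much larger than the gap between them.
TRUE_VALUE = {"A": 1.00, "B": 1.02}
NOISE = 0.5  # standard deviation of preference instability

def noisy_evaluation(option, rng):
    return TRUE_VALUE[option] + rng.gauss(0, NOISE)

def decide(budget, rng):
    """Average `budget` noisy evaluations of each option and pick the higher one."""
    averages = {
        option: sum(noisy_evaluation(option, rng) for _ in range(budget)) / budget
        for option in TRUE_VALUE
    }
    return max(averages, key=averages.get)

rng = random.Random(0)
for budget in (1, 10, 10_000):  # small vs. large "computational power"
    picks_b = sum(decide(budget, rng) == "B" for _ in range(200))
    # With a tiny budget the choice is close to a coin flip; with a large
    # budget the same noisy preferences yield B nearly every time.
    print(f"budget={budget:>6}: chose B in {picks_b}/200 decisions")
```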