Yes, I vehemently dispute this idea that a goal can’t be more or less [likely to achieve higher expected utility for other agents than any other possible goal].
Yes, I vehemently dispute this idea that a goal can’t be more or less [likely to achieve higher expected utility according to goal.Parent().utilityFunction].
Yes, I vehemently dispute this idea that a goal can’t be more or less [Kolmogorov-complex].
Yes, I vehemently dispute this idea that a goal can’t be more or less [optimal for achieving your values].
Yes, I vehemently dispute this idea that a goal can’t be more or less [easy to describe as the ratio of two natural numbers].
Yes, I vehemently dispute this idea that a goal can’t be more or less [correlated in conceptspace with the values in the agent’s utility function].
Yes, I vehemently dispute this idea that a [proposed utility function] can’t be more or less rational.
Yes, I vehemently dispute this idea that a [set of predetermined criteria for building a utility function] can’t be more or less rational.
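To make the contrast concrete, here is a minimal sketch (not from the original thread) treating a couple of the bracketed substitutions above as distinct, measurable properties of a goal rather than the single catch-all word “rational.” The names `Goal`, `parent`, and `utility_function` are illustrative assumptions, loosely echoing `goal.Parent().utilityFunction`; they are not an established API.

```python
# Hedged sketch: each Taboo substitution becomes a separate, comparable scale.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Goal:
    description: str                                        # how the goal is stated
    parent: Optional["Goal"] = None                          # goal this one is instrumental to
    utility_function: Optional[Callable[[str], float]] = None


def description_length(goal: Goal) -> int:
    """Crude stand-in for Kolmogorov complexity: length of the goal's description."""
    return len(goal.description)


def expected_utility_to_parent(goal: Goal) -> float:
    """How well this goal scores under its parent goal's utility function."""
    if goal.parent is None or goal.parent.utility_function is None:
        return 0.0
    return goal.parent.utility_function(goal.description)


# Usage: two different scales a goal can sit higher or lower on.
values = Goal("maximize human flourishing",
              utility_function=lambda d: 1.0 if "flourishing" in d else 0.1)
subgoal = Goal("build hospitals that promote flourishing", parent=values)
print(description_length(subgoal))          # -> 40
print(expected_utility_to_parent(subgoal))  # -> 1.0
```

The point of the sketch is only that goals can straightforwardly be compared on scales like these, even if the bare word “rational” invites people to talk past one another.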
Care to enlighten me exactly on just what it is you’re disputing, and on just what points should be discussed?
Let’s play rationalist Taboo!
Edit: Fixed markdown issue, sorry!