preferences being arational, the problem is that they are not being rationally (effectively) implemented/followed, not that they are somehow “not rational” themselves
That position may make sense, but I think you’ll have to make more of a case for it. Currently, it’s standard in decision theory to speak of irrational preferences, such as preferences that can’t be represented as expected utility maximization, or preferences that aren’t time consistent.
But I take your point about “rationalize”, and I’ve edited the article to remove the usages. Thanks.
Agreed. My excuse is that I (and a few other people, I’m not sure who originated the convention) consistently use “preference” to refer to that-deep-down-mathematical-structure determined by humans/humanity that completely describes what a meta-FAI needs to know in order to do things the best way possible.