That position may make sense, but I think you'll have to make more of a case for it. Currently, it's standard in decision theory to speak of irrational preferences, such as preferences that can't be represented as expected utility maximization, or preferences that aren't time-consistent.
Agreed. My excuse is that I (and a few other people; I'm not sure who originated the convention) consistently use "preference" to refer to that deep-down mathematical structure, determined by humans/humanity, that completely describes what a meta-FAI needs to know in order to do things in the best way possible.