Rational agents all need to value rationality.
Not necessarily. An agent that values X and doesn't have a stupid prior will invariably strive to find the best way to accomplish X. If X requires information about an outside world, it will build epistemology and sensors; if X requires planning, it will build manipulators and a way of evaluating hypotheticals for X-ness.
All for the sake of X. It will be rational because rationality helps it attain X.
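A minimal sketch of that claim, with entirely hypothetical names and numbers: an agent whose only terminal value is X still ends up doing epistemic work, simply because model-building raises its expected X. It never asks "do I value truth?"; it only asks "what maximizes X?".

```python
def expected_x(action, world_model_quality):
    """Expected amount of X obtained by an action, given how good the
    agent's world model currently is. Purely illustrative payoffs."""
    payoffs = {
        "act_blindly": 1.0,                                   # X gained with no model at all
        "build_sensors_then_act": 0.5 + 4.0 * world_model_quality,
        "improve_model_then_act": 0.2 + 6.0 * world_model_quality,
    }
    return payoffs[action]

def choose(world_model_quality):
    actions = ["act_blindly", "build_sensors_then_act", "improve_model_then_act"]
    # Truth-tracking is chosen only insofar as it pays off in X.
    return max(actions, key=lambda a: expected_x(a, world_model_quality))

print(choose(world_model_quality=0.1))  # -> "act_blindly": epistemic work not yet worth it
print(choose(world_model_quality=0.9))  # -> "improve_model_then_act": epistemic work wins
```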
Good epistemological rationality requires avoidance of bias, contradiction, arbitrariness, etc. That is just what my rationality-based ethics needs.
I will defer to the problem of: Omega offers you two boxes. Each box contains a statement, and upon choosing a box you will instantly believe that statement. One contains something true which you currently believe to be false, tailored to cause maximum disutility in your preferred ethical system; the other contains something false which you currently believe to be true, tailored to cause maximum utility.
Truth with negative consequences, or falsehood with positive ones? If you value nothing over truth, you will realise something terrible upon opening the first box, perhaps something that makes you kill your family. If you value something other than truth, you will end up believing that the code you are writing will make pie, when it will in fact make an FAI.
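A toy formalization of that dilemma (all weights and payoffs hypothetical): score each box on how well the resulting belief tracks truth and on the outcome-utility of what you then do. The choice flips depending on whether truth is a terminal value or merely an instrumental one.

```python
boxes = {
    # (truth_value_of_resulting_belief, outcome_utility)
    "true_but_harmful":  (1.0, -100.0),   # the terrible realisation
    "false_but_helpful": (0.0, +100.0),   # "this code makes pie" (it makes an FAI)
}

def score(box, truth_weight):
    truth, outcome = boxes[box]
    # truth_weight: how much the agent terminally cares about believing truths,
    # relative to caring about outcomes.
    return truth_weight * truth + outcome

truth_valuer   = max(boxes, key=lambda b: score(b, truth_weight=1000.0))
outcome_valuer = max(boxes, key=lambda b: score(b, truth_weight=0.0))
print(truth_valuer)    # -> "true_but_harmful"
print(outcome_valuer)  # -> "false_but_helpful"
```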
Do you mean this as a general principle, along the lines of “If I am constructed so as to operate a particular way, it follows that I value operating that way”? Or as something specific about rationality?
If the former, I disagree, but if the latter I’m interested in what you have in mind.
Only instrumentally.
Epistemic rationality has instrumental value. That’s where the trouble starts.