I worry that rationality, to the extent it must weigh subjective considerations, tends to minimize the importance of those considerations in order to yield a clearer inquiry.
Can you clarify what you mean by this?
sure. note that i don’t offer this as conclusive or correct, just as something i’m thinking about. also, let’s assume rational choice theory is universally applicable to decision making.
rational choice theory gives you an equation, and all we have to do is fill that equation with the proper inputs, weight them correctly, and out comes an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily converted to probabilities or numbers—I’m worried this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume our answers are objective and true, even though that belief often rests on the strength of the starting equation, not on our ability to accurately value and include the appropriate subjective factors. To the extent answering a question becomes difficult, we manufacture “certainty” by ignoring subjectivity or assuming it is less relevant than it actually is.
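the “equation” i’m gesturing at is presumably something like expected-utility maximization. a minimal sketch (the actions, probabilities, and utilities below are invented purely for illustration) shows where the subjective inputs enter: the arithmetic is exact, but the answer inherits all the uncertainty in the guessed numbers.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action.

    The formula itself is objective; the (p, u) values fed into it
    are subjective estimates supplied by the decision maker.
    """
    return sum(p * u for p, u in outcomes)

# Hypothetical decision: take a new job vs. stay put.
# These numbers are subjective guesses, not measured facts.
take_job = [(0.6, 10), (0.4, -5)]  # 60% it goes well, 40% it goes badly
stay_put = [(1.0, 2)]              # certain but modest payoff

actions = {"take_job": take_job, "stay_put": stay_put}
best = max(actions, key=lambda name: expected_utility(actions[name]))
# The ranking flips entirely if the guessed inputs shift a little,
# which is the worry: the precise-looking output masks soft inputs.
```

the point of the sketch is just that the “objectivity” lives in `sum(p * u ...)`, while everything decision-relevant lives in the hand-picked tuples.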
Simply put, the belief that we have a good and objective starting point biases us to believe we also can and will derive an objectively correct answer, which affects the accuracy with which we fill in the equation.
I agree that this is problematic but don’t see what it has to do with what I’ve been saying.
you suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. to the extent that’s wrong, you’re excluding emotion hacking (a subjective factor) from your application of epistemic rationality.
I’m happy to agree that emotion hacking is important to epistemic rationality.
ok, i wasn’t trying to play “gotcha,” just answering your question. good chat, thanks for engaging with me.