This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its negation if there is good reason to think that proposition is true. Is this un-Bayesian?
It’s a meta-level/aliasing sort of problem, I think. You don’t believe it’s more ethical/moral to believe any specific proposition; you believe it’s more ethical/moral to believe ‘the proposition most likely to be true’, which is a variable that can be filled with whatever proposition the situation suggests, so it’s a different class of thing. Effectively it’s equivalent to ‘taking apparent truth as normative’, so I’d call it the only position of that format that is Bayesian.
This website seems to have two definitions of rationality: rationality as truth-finding, and rationality as goal-achieving. Since truth deals with “is”, and morality deals with “ought”, morality will be of the latter kind. Because they are two different definitions, at some point they can be at odds—but what if your primary goal is truth-finding (which might be required by your statement if you make no exceptions for beneficial self-deception)? How would you feel about ignoring some truths, because they might lead you to miss other truths?
This article is about how learning some truths can prevent you from learning other truths, with an implication that the order of learning will mitigate these effects. In some cases, you might be well served by purging truths from your mind (for example, “there is a minuscule possibility of X” will activate priming and the availability heuristic). Some truths are simply much more useful than others, so what do you do if some lesser truths can get in the way of greater truths?
Neither truth-finding nor goal-achieving quite captures the usual sense of the word around here. I’d say the latter is closer to how we usually use it, in that we’re interested in fulfilling human values; but explicit, surface-level goals don’t always further deep values, and in fact can be actively counterproductive thanks to bias or partial or asymmetrical information.
Almost everyone who thinks they terminally value truth-finding is wrong; it makes a good applause light, but our minds just aren’t built that way. But since there are so many cognitive and informational obstacles in our way, finding the real truth is at some point going to be critically important to fulfilling almost any real-world set of human values.
On the other hand, I don’t rule out beneficial self-deception in some situations, either. It shouldn’t be necessary for any kind of hypothetical rationalist super-being, but there aren’t too many of those running around.