“Where do you get your priors, when you start modeling yourself seriously instead of doing it by halfhearted intuition?”
Is the thought here that you should try to find out what your priors ought to be by figuring out how you work psychologically? That seems odd. If you think some priors are special, why think they’d be the ones that you’d have in particular? If you think no priors are special, why bother trying to use your own?
Apart from the intrinsic interest of self-knowledge, I would have thought the point of making your psychological life more luminous would be to avoid known biases and misfiring heuristics that have bad consequences (epistemic and otherwise). Of particular interest here would be information about your current credences (which probably don’t form a probability function, though they may approximate one in good cases) and your updating method (which probably isn’t a form of conditionalization, though it may approximate one in good cases). Knowing your credences and updating method could help you know when you’re subject to biases and misfiring heuristics.
But note that this other goal does seem to rely on “intuition” (scare quotes because no one knows what intuitions are) at some level (though I don’t know if it’s half-hearted). You’ll have to come to the table with some assumptions about what would be irrational so that you can compare your judgments to that standard. It seems doubtful to me that such assumptions could be vindicated purely empirically, since the questions seem thoroughly normative. It seems doubtful, for instance, that you could justify these assumptions about what would make some pieces of reasoning rational by appeal to psychological facts about yourself or anyone else.