No. 2 might be better thought of as “What my talk is optimized for.”
I care much more about the fact that “my conscious thoughts are optimized for X” than “my talk is optimized for X,” though I agree that it might be easier to figure out what our talk is optimized for.
If you want to make the two results more consistent, you want to move your talk closer to action.
I’m not very interested in consistency per se. If we just changed my conscious thoughts to be in line with my type 1 preferences, that seems like it would be a terrible deal for my type 2 preferences.
As with bets, or other more concrete actions.
Sometimes bets can work, and I make many more bets than most people, but quantitatively speaking I am skeptical of how much they can do (how large they have to be, on what range of topics they are realistic, what the other attendant costs are). Using conservative epistemic norms seems like it can accomplish much more.
If we want to tie social benefit to accuracy, it seems much more promising to use “the eventual output of conservative epistemic norms” as our gold standard rather than “what eventually happens” (i.e., reality), because it is available (a) much sooner, (b) with lower variance, and (c) on a much larger range of topics.
(An obvious problem with that is that it gives people larger motives to manipulate the output of the epistemic process. If you think people already have such incentives then it’s not clear this is so bad.)
I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.
You can of course bet on the eventual output of conservative epistemic norms, just as you can bet on what actually happens. I’m not sure what else you can do to create incentives now to believe what conservative norms will eventually say.