“the most powerful tool is adopting epistemic norms which are appropriately conservative; to rely more on the scientific method, on well-formed arguments, on evidence that can be clearly articulated and reproduced, and so on.”
A simple summary: Believe Less. Hold higher standards for what counts as sufficient reason to believe. Of course this is what most people actually do. They don’t bother to hold beliefs on the kind of abstract topics on which Paul wants to hold beliefs.
“1. What my decisions are optimized for. … 2. What I consciously believe I want.”
No. 2 might be better thought of as “What my talk is optimized for.” Both systems are highly optimized. This way of seeing it emphasizes that if you want to make the two results more consistent, you want to move your talk closer to action. As with bets, or other more concrete actions.
As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I’m interpreting correctly.
don’t bother to hold beliefs on the kind of abstract topics
I’ve read this sentiment from you a couple times, and don’t understand the motive. Have you written about it more in depth somewhere?
I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.
Yes, believe fewer things and believe them less strongly.
On abstract beliefs I’m not following you. The usual motive for most people is that they don’t need most abstract beliefs to live their lives.
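To make “believe them less strongly” concrete, here is one minimal numeric sketch (the weights and numbers are invented for illustration, not anything either party proposes): shrink each raw credence toward an uninformative 50% prior, shrinking more when the evidence behind it is weaker.

```python
# Illustration only: one crude way to "believe less strongly" is to blend a
# raw credence with an uninformative 50% prior, keeping less of the raw
# credence when the evidence behind it is weaker. Weights are invented.

def tempered_credence(raw_p, evidence_weight):
    """Blend a raw credence with a 50% prior; evidence_weight is in [0, 1]."""
    return evidence_weight * raw_p + (1 - evidence_weight) * 0.5

# A well-replicated, reproducible result: keep most of the raw credence.
print(tempered_credence(0.90, evidence_weight=0.8))  # ~0.82

# An abstract claim argued mostly from intuition: shrink heavily toward 50%.
print(tempered_credence(0.90, evidence_weight=0.2))  # ~0.58
```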
I’d agree with you that most abstract beliefs aren’t needed for us to simply live our lives. However, it looks like you were making a normative claim that to minimize bias, we shouldn’t deal too much with abstract beliefs when we can avoid it.
Similarly, IIRC, this is also your answer to things like discussion over whether EMs will really “be us”, and other such abstract philosophical arguments. Perhaps such discussion isn’t tractable, but to me it does still seem important for determining whether such a world is a utopia or a dystopia.
So, I would have argued the opposite: try to develop a good, solid, comprehensive set of abstract principles, and then apply them uniformly to object-level decisions. This should help us optimize for the sorts of things our talk and thoughts are optimized for, and minimize the influence of our other biases. I am my conscious mind, so why should I care much what my subconscious wants?
Here’s a bit more detail, if you are (or anyone else is) curious. If you’ve heard these sorts of arguments a hundred times before, feel free to skip and link to a counterargument.
Predicting how an unreflective society will actually react may be easier than this sort of philosophy, but social acceptance seems necessary yet not sufficient here. Under my view, Oedipus Rex’s life accomplishments might still have negative utility to him, even if he lived a happy life and never learned who his mother was. Similarly, the Star Trek universe quickly turns from a utopia to a dystopia if teleportation technically counts as death, or the moral equivalent, according to human minds who know all the facts and have heard all the arguments. (Klingons may come to different conclusions, based on different values.) I’m not a vegan, but there appears to be a small chance that animals do have significant moral weight, and that we’re living in a Soylent Green-style dystopia.
I would argue that ignoring the tough philosophical issues now dooms us to the status quo bias in the future. To me, it seems that we’re potentially less biased about abstract principles that aren’t pressing or politically relevant at the moment. If I’ve thought about trolley problems and whatnot before, and have formed abstract beliefs about how to act in certain situations, then I should be much less biased when an issue comes up at work or a tough decision needs to be made at home, or there’s a new political event and I’m forming an opinion. More importantly, the same should be true of reasoning about the far future, or anything else.
No. 2 might be better thought of as “What my talk is optimized for.”
I care much more about the fact that “my conscious thoughts are optimized for X” than about the fact that “my talk is optimized for X,” though I agree that it might be easier to figure out what our talk is optimized for.
if you want to make the two results more consistent, you want to move your talk closer to action
I’m not very interested in consistency per se. If we just changed my conscious thoughts to be in line with my type 1 preferences, that seems like it would be a terrible deal for my type 2 preferences.
As with bets, or other more concrete actions.
Sometimes bets can work, and I make many more bets than most people, but quantitatively speaking I am skeptical of how much they can do (how large they have to be, on what range of topics they are realistic, and what the other attendant costs are). Using conservative epistemic norms seems like it can accomplish much more.
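As a rough back-of-the-envelope sketch of the “how large they have to be” worry (all numbers invented, and ignoring variance, counterparties, and transaction costs): the expected edge from a modest disagreement is small unless the stake is large.

```python
# Illustration only: the incentive a bet creates, modeled as buying $1
# contracts at an agreed price. Expected profit per contract is simply
# (my credence - price); all numbers below are invented.

def expected_profit(my_credence, price, stake_dollars):
    """Expected profit from spending stake_dollars on $1 contracts priced at
    `price`, given my own credence that the event will happen."""
    contracts = stake_dollars / price
    return contracts * (my_credence - price)

# A modest disagreement (my 60% vs. agreed 50%) with $100 at stake:
print(expected_profit(0.60, 0.50, 100))  # ~$20 of expected edge

# The same disagreement with only $10 at stake barely rewards any research:
print(expected_profit(0.60, 0.50, 10))   # ~$2 of expected edge
```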
If we want to tie social benefit to accuracy, it seems like it would be much more promising to use “the eventual output of conservative epistemic norms” as our gold standard, rather than “what eventually happens” (reality), because it is available (a) much sooner, (b) with lower variance, and (c) on a much larger range of topics.
(An obvious problem with that is that it gives people larger motives to manipulate the output of the epistemic process. If you think people already have such incentives then it’s not clear this is so bad.)
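Mechanically, this just means settling a proper scoring rule (or a bet) against a different resolution source. A minimal sketch of what that swap amounts to, with invented forecasts and resolutions:

```python
# Illustration only: the same proper scoring rule (Brier score) can be
# settled against different "gold standards". Forecast and resolutions
# below are invented.

def brier_score(forecast, resolution):
    """Quadratic (Brier) score; lower is better. `resolution` is 1 or 0
    according to whichever gold standard we choose to settle against."""
    return (forecast - resolution) ** 2

forecast = 0.7

# Option A: settle against what eventually happens (reality) -- accurate,
# but late, high-variance, and only defined for a narrow range of claims.
reality_resolution = 1
print(brier_score(forecast, reality_resolution))  # ~0.09

# Option B: settle against what a conservative epistemic process eventually
# concludes -- available sooner and on more topics, but easier to manipulate.
process_resolution = 0
print(brier_score(forecast, process_resolution))  # ~0.49
```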
I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.
You can of course bet on the eventual outcome of conservative epistemic norms, just as you can bet on what actually happens. Not sure what else you can do to create incentives now to believe what conservative norms will eventually say.