As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I’m interpreting correctly.
don’t bother to hold beliefs on the kind of abstract topics
I’ve read this sentiment from you a couple times, and don’t understand the motive. Have you written about it more in depth somewhere?
I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.
Yes, believe fewer things and believe them less strongly.
On abstract beliefs, I’m not following you. The usual motive for most people is that they don’t need most abstract beliefs to live their lives.
I’d agree with you that most abstract beliefs aren’t needed for us to simply live our lives. However, it looks like you were making a normative claim: that to minimize bias, we shouldn’t deal too much with abstract beliefs when we can avoid it.
Similarly, IIRC, this is also your answer to discussions of whether EMs will really “be us,” and other such abstract philosophical arguments. Perhaps such discussion isn’t tractable, but to me it still seems important for determining whether such a world would be a utopia or a dystopia.
So, I would have argued the opposite: try to develop a good, solid, comprehensive set of abstract principles, and then apply them uniformly to object-level decisions. This should help us optimize for the sorts of things our talk and thoughts are optimized for, and minimize the influence of our other biases. I am my conscious mind, so why should I care much what my subconscious wants?
Here’s a bit more detail, if you are (or anyone else is) curious. If you’ve heard these sorts of arguments a hundred times before, feel free to skip and link to a counterargument.
Predicting how an unreflective society will actually react may be easier than this sort of philosophy, but social acceptance seems necessary rather than sufficient here. Under my view, Oedipus Rex’s life accomplishments might still have negative utility to him, even if he lived a happy life and never learned who his mother was. Similarly, the Star Trek universe quickly turns from a utopia into a dystopia if teleportation technically counts as death, or the moral equivalent, according to human minds who know all the facts and have heard all the arguments. (Klingons may come to different conclusions, based on different values.) I’m not a vegan, but there appears to be a small chance that animals do have significant moral weight, and that we’re living in a Soylent Green-style dystopia.
I would argue that ignoring the tough philosophical issues now dooms us to status quo bias in the future. To me, it seems that we’re potentially less biased about abstract principles that aren’t pressing or politically relevant at the moment. If I’ve thought about trolley problems and the like before, and have formed abstract beliefs about how to act in certain situations, then I should be much less biased when an issue comes up at work, when a tough decision needs to be made at home, or when a new political event prompts me to form an opinion. More importantly, the same should be true of reasoning about the far future, or anything else.