My point is that if you don’t have some of the general/meta beliefs described in this post, you will generally make much worse decisions, in a way you will often sense intuitively but not through your explicit reasoning (which is dangerous if you don’t take your intuitive warning signal seriously).
Let’s assume you’re someone who doesn’t know the answer to the question I asked (or the information in the specific answer I gave).
Here are examples of what could go wrong.
Example 1
If you believe that a discontinuity in consciousness means you die, and that when consciousness is re-established in the brain another mind is instantiated that is merely a copy of you, then you might decide not to go back to sleep until you actually, biologically die of sleep deprivation.
While this could be the actual optimal choice even after taking this post into account, it seems likely to me that the information in this post could change one’s mind from ‘not sleeping at all’ to ‘keeping a normal sleeping habit’.
Some approaches to moral uncertainty might actually recommend sleeping even if you’re rather confident it will kill you, because: (credence that discontinuities matter) × (how long you can survive without sleep) ≪ (credence that discontinuities don’t matter) × (how long you can live if you sleep).
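To make that comparison concrete, here is a minimal sketch in Python, with made-up credences and durations (none of these numbers come from the post; they only illustrate the shape of the inequality):

```python
# Toy expected-value comparison under moral uncertainty.
# All numbers below are illustrative assumptions, not claims from the post.

credence_discontinuity_matters = 0.9       # credence that a discontinuity "kills" you
credence_discontinuity_harmless = 0.1      # remaining credence that it doesn't

days_survivable_without_sleep = 11         # order of magnitude of the longest documented streak
days_livable_with_normal_sleep = 50 * 365  # decades of ordinary life

stakes_if_you_stop_sleeping = credence_discontinuity_matters * days_survivable_without_sleep
stakes_if_you_keep_sleeping = credence_discontinuity_harmless * days_livable_with_normal_sleep

print(stakes_if_you_stop_sleeping)  # ~10 expected days of continuous experience
print(stakes_if_you_keep_sleeping)  # ~1825 expected days: dominates even at 10% credence
```

Even at 90% credence that discontinuities kill you, the ‘keep sleeping’ side dominates by more than two orders of magnitude under these made-up numbers.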
But if you don’t know how to integrate uncertainty at the model level into your reasoning, then you might just act on your belief that sleep kills you, and so stop sleeping. Judging by the ‘object-level’ beliefs I see shared around me, this error mode could severely affect a lot of people.
I’ve written more about this here, but I have made the post private for now while I revisit whether it contains info-hazards.
Example 2
If you don’t see any error in Pascal’s mugging, and so decide to act on its logical implications, then a mugger might rob you of everything and render you a complete slave.
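As a toy illustration (with invented numbers, not anything from the post), a naive expected-value calculation hands the decision to whoever names a big enough payoff:

```python
# A naive expected-value calculation being "mugged": any tiny credence in the
# mugger's claim is swamped by a sufficiently large promised payoff.
# All numbers are invented for illustration.

credence_mugger_is_honest = 1e-12        # astronomically small, yet not small enough
claimed_payoff_if_you_pay = 3 ** 100     # utility the mugger promises in return
cost_of_handing_over_wallet = 1_000      # what you actually lose by complying

expected_gain_from_paying = credence_mugger_is_honest * claimed_payoff_if_you_pay
if expected_gain_from_paying > cost_of_handing_over_wallet:
    print("Naive expected value says: hand over the wallet.")
```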
Actually, I’m not sure I have a defense mechanism to propose for this one, besides knowing the resolution of the problem before (or at the same time as) being introduced to it. But one could argue that “your intuition that this is wrong” would be a good defense mechanism against explicit reasoning going astray.