The compartmentalization of information is anything but safe.
I agree in most cases; however, there are some cases where ideas are so Big and Scary and Important that fully propagating them through your explicit reasoning causes you to go nuts. This has happened to multiple people on Less Wrong, whom I will not name for obvious reasons.
I would like to emphasize that I agree in most cases. Compartmentalization is bad.
I think it happens when ideas are wrong and/or propagated incorrectly. Basically, you would need extremely high confidence in a very big and scary idea before it could overwrite anything. MWI is very big and scary. Provisionally, before I develop a moral system based on MWI, it is perfectly consistent to assume that it has some probability q of being wrong, and that the relative morality of actions, which is unknown under MWI but known under SI, does not change; consequently, no moral decision (involving a comparison of moral values) changes until there is a high-quality moral system based on MWI. A quick-hack moral system based on MWI is likely to be considerably incorrect and to lead to rash actions (e.g. quantum suicide, which may turn out to be just as bad as normal suicide once you figure things out).
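A minimal sketch of the expected-value reasoning behind this, using illustrative symbols that are not in the original ($q$ for the probability that MWI is wrong, $V_{\mathrm{SI}}$ and $V_{\mathrm{MWI}}$ for an action's moral value under each view):

$$E[V(a)] = q\,V_{\mathrm{SI}}(a) + (1-q)\,V_{\mathrm{MWI}}(a)$$

If, provisionally, the unknown $V_{\mathrm{MWI}}$ is assumed not to reverse the SI ordering, then for any two actions $a$ and $b$, $E[V(a)] > E[V(b)]$ exactly when $V_{\mathrm{SI}}(a) > V_{\mathrm{SI}}(b)$, so no comparison-based moral decision changes until a proper MWI-based moral system exists.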
The ship is compartmentalized against a hole in the hull, not against something great happening to it. An incorrect idea held with high confidence can be the hole in the hull; the water is the resulting nonsense overriding the system.