we can’t be sure at this point that we have zero self-shadowing blind spots, but I think there’s a reasonable chance that’s in fact the case.
The position, then, is that there is a single “ability to philosophize” that is sufficient (given enough raw intelligence) to overcome all such blind spots.
So, one way to attack this position would be to construct a toy model of a system that has an “ability to philosophize” but that still fails in some cases.
An example would be a Bayesian AI that self-modifies to one-box on all Newcomblike problems where Omega examines it after that self-modification event. It realizes that it is better to be “rationally irrational”, but only in a limited sense: on any problem where Omega’s examination predates the modification, it still two-boxes and loses.
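The toy model can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the class and payoff values are my own invention, with the standard $1,000 / $1,000,000 Newcomb amounts): the agent’s disposition is indexed by time, so the self-modification only binds on problems where Omega examined the agent after the modification.

```python
# Toy model: an agent that self-modifies to one-box, but only for
# Newcomblike problems where Omega examines it AFTER the modification.

SMALL = 1_000      # the transparent box always holds $1,000
BIG = 1_000_000    # the opaque box holds $1M iff Omega predicted one-boxing

class SelfModifyingAgent:
    def __init__(self):
        self.modified_at = None  # time of the self-modification, if any

    def self_modify(self, t):
        """Adopt a one-boxing disposition from time t onward."""
        self.modified_at = t

    def disposition(self, t):
        """What Omega sees if it examines the agent at time t."""
        if self.modified_at is not None and t >= self.modified_at:
            return "one-box"
        return "two-box"

    def choose(self, prediction_time):
        # The agent one-boxes only on problems covered by its
        # self-modification; otherwise it falls back to two-boxing.
        return self.disposition(prediction_time)

def payoff(choice, predicted):
    big = BIG if predicted == "one-box" else 0
    return big if choice == "one-box" else big + SMALL

agent = SelfModifyingAgent()
agent.self_modify(t=10)

# Omega examined the agent after the modification: it one-boxes, wins $1M.
late = payoff(agent.choose(20), agent.disposition(20))   # 1000000

# Omega examined the agent before the modification: the blind spot remains,
# the agent is predicted (and acts) as a two-boxer, earning only $1,000.
early = payoff(agent.choose(5), agent.disposition(5))    # 1000
```

The point of the sketch is that the agent’s “ability to philosophize” got it partway (it saw the value of being “rationally irrational”), yet its fix is scoped by time, leaving a class of problems on which it still fails.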
A less controversial example is the case of devout Catholics who were convinced that even thinking about whether or not God might exist would cause them to be sent straight to hell.
For any agent or community of agents, there are some cherished beliefs that the agent/community refuses to challenge. Sometimes for good reason. Even I have some, and LW certainly has some.
In cases like the Catholic one, a false belief shields itself from criticism by convincing the agent or community involved that even the act of questioning the belief is of highly negative value.