So your argument is that the reason that the theists are wrong is because they only sorta-kinda believe in God anyway, but if they really believed, then they’d be just as right as we are?
But only in the sense that their calculation could be correct according to a particularly weird prior. The difference between a normal theist and a “god-impressed mind” who both believe in God is one of rationality: the former makes mistakes in updating beliefs, the latter probably doesn’t. The same holds for an atheist god-impressed mind and a human atheist. You can’t expect to find that weird a prior in a human. And of course, you should say that the god-impressed are wrong in their beliefs, even though they correctly follow the evidence according to their prior. If you value their success in the real world more than the autonomy of their preference, you may want to reach into their minds and make appropriate changes.
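To make “correct according to a particularly weird prior” concrete, here is a toy calculation with made-up numbers: two agents see the same evidence and both apply Bayes’ rule without error, yet they end up far apart because they started from different priors.

```python
# Toy illustration (assumed numbers): two agents update on the same evidence
# by Bayes' rule. Both updates are "correct"; only the priors differ.

def posterior(prior, likelihood_ratio):
    """Posterior probability of H after evidence with likelihood ratio
    P(E|H)/P(E|not-H), starting from prior = P(H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

evidence_lr = 1 / 9  # evidence that favors not-H nine to one

print(posterior(0.999, evidence_lr))  # weird prior: stays around 0.99
print(posterior(0.001, evidence_lr))  # ordinary prior: drops toward 0.0001
```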
I should say again: the program that defines the decision-making algorithm can’t normally be changed, which means that one can’t really be “converted” to a different preference, though one can be converted to different beliefs and feelings. Observations don’t change the algorithm; they are processed according to that algorithm. This means that if you care about reflective consistency (and everyone does, in the sense of preservation of preference), you’d try to counteract the unwanted effects of the environment on yourself, including the self-promoting effects where you start liking the new situation. The extent to which you like the new situation, the “level of conviction”, is pretty much irrelevant, just as the presence of a losing psychological drive is. It would take great integrity (not “strength of conviction”) in the change for significantly different values to really sink in, in the sense that the new preference-on-reflection would resemble the new beliefs and feelings much as the native preference-on-reflection resembles native (sane, secular, etc.) beliefs and feelings.
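For what I mean by “the algorithm doesn’t change”, a loose sketch in code (the names and structure are mine, purely illustrative, not a claim about how brains actually factor into program and data): observations rewrite the beliefs, but only through a fixed update rule, and the values that drive decisions sit with the fixed part.

```python
# Loose sketch (hypothetical names): the update rule and the values are the
# fixed "program"; beliefs are the data that observations rewrite through it.

class Agent:
    def __init__(self, prior_beliefs, values):
        self.beliefs = dict(prior_beliefs)  # mutable: observations rewrite this
        self.values = dict(values)          # fixed along with the code: nothing below rewrites it

    def observe(self, proposition, likelihood_ratio):
        # Observations don't change the algorithm; they are processed *by* it.
        p = self.beliefs[proposition]
        odds = (p / (1 - p)) * likelihood_ratio
        self.beliefs[proposition] = odds / (1 + odds)

    def decide(self, options):
        # options maps each action to the proposition it succeeds on;
        # the decision uses current (mutable) beliefs and fixed values.
        return max(options, key=lambda a: self.beliefs[options[a]] * self.values[a])
```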
I doubt that you can define a way to choose an algorithm out of a human brain that makes that sentence true.
Yes, that wasn’t careful. In this context, I mean “no large shift of preference”. Tiny changes occur all the time (and are actually very important if you scale them up by giving the preference with or without these changes to a FAI). You can model the extent of reversibility (as compared to a formal computer program) roughly by what can be inferred about the person’s past, which doesn’t necessarily all have to come from the person’s brain. (By an algorithm in a human brain I mean all of the human brain, basically a program that would run an upload implementation, together with the data.)