We typically imagine CEV asking what people would do if they ‘knew what the AI knew’ - let’s say the AI estimates the expected value of a given action, with utility defined by extrapolated versions of us who know the truth, and probabilities taken from the AI’s own distribution. I am absolutely saying that theism fails under any credible epistemology, and that any well-programmed FAI would expect ‘more knowledgeable versions of us’ to become atheists on general principles. Whether this means they would change “if they knew all the arguments for and against religion” depends on whether they can accept some extremely basic premise.
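The scheme described above can be sketched in a few lines. This is a toy illustration only, with all names hypothetical: the action is scored by averaging utilities assigned by ‘extrapolated’ versions of us over the AI’s own probability distribution.

```python
# Toy sketch of the expected-value scheme described above (all names hypothetical):
# probabilities come from the AI's own beliefs, utilities from "extrapolated" selves.

def expected_value(action, outcomes, ai_probability, extrapolated_utility):
    """EV(action) = sum over outcomes of P_AI(outcome | action) * U_extrapolated(outcome)."""
    return sum(
        ai_probability(outcome, action) * extrapolated_utility(outcome)
        for outcome in outcomes
    )

# Stand-in numbers, purely for illustration.
outcomes = ["good", "bad"]
probs = {("good", "act"): 0.8, ("bad", "act"): 0.2}
utils = {"good": 1.0, "bad": -1.0}
ev = expected_value("act", outcomes, lambda o, a: probs[(o, a)], lambda o: utils[o])
# ev == 0.8 * 1.0 + 0.2 * (-1.0) == 0.6
```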
(Note that nobody comes into the world with anything even vaguely resembling a prior that favors a major religion. We might start with a bias in favor of animism, but nearly everyone would verbally agree this anthropomorphism is false.)
It seems much less clear whether CEV would make psychopathy irrelevant. But potential victims must object to their own suffering at least as strongly as real-world psychopaths want to inflict it. So the most obvious worst-case scenario, under implausibly cynical premises, looks more like Omelas than like a Mongol invasion. (Here I’m completely ignoring the clause meant to address such scenarios, “had grown up farther together”.)
We typically imagine CEV asking what people would do if they ‘knew what the AI knew’
No, we don’t, because this would be a stupid question. CEV doesn’t ask people; it tells people what they want.
any well-programmed FAI would expect ‘more knowledgeable versions of us’ to become atheists on general principles.
I see little evidence to support this point of view. You might think that atheism is obvious, but a great many people, many of them smarter than you, disagree.