Exactly—Aumann has the same evidence that you or I have about materialist scientific facts, yet chooses not to utilize x-rationality to accurately evaluate his beliefs.
While I can’t interview Aquinas about the reasons he believed in God, I’m sure the things you listed were causally important. However, if he had had x-rationality, the other elements wouldn’t have made a difference—in some sense, x-rationality is a way of getting around the limitations of a particular culture and time.
Do you think a general AI would have any difficulty disbelieving in God, even if it had been “raised” in a culture in which belief was common and incentivized?
That probably depends on what you mean by “a general AI”. We humans are (approximately) general natural intelligences (indeed, that’s almost the definition of what many people mean by “general” in this context), and plenty of humans have lots of difficulty disbelieving in God. If you mean an AI whose intelligence and knowledge are greatly superhuman, emerging from a human culture in which belief in God is common, then I expect it would (knowing its own intellectual superiority to us) have little difficulty escaping from the cultural presumption of theism. As for a culture of superhuman AIs in which theism was common, I don’t know; the mere existence of such a culture would be extremely interesting and good evidence for something surprising (which might or might not be theism).
I mean an AI along the lines Eliezer has outlined; that is, an AI that can extrapolate maximally from a given set of evidence.
By the way, I find it hard to imagine a culture of superhuman AIs in which theism is common. I’d be interested to talk a little more about how that would work—in particular, what evidence each AI would accept from other AIs that would convince them to be a theist.
Yeah, me too. That was rather my point.