Religion, FPTP elections and wars are irrational even according to non-x rationality. (With all sorts of caveats, which apply just as much to x-rationality.) The DDT ban thing is a myth, as ciphergoth points out. Asteroids and cryonics, maybe, insofar as making the right decisions there probably involves a large element of Shut Up And Multiply; but in fact we are already making some effort to spot asteroids early enough, and the probabilities governing whether one should sign up for cryonics are highly debatable.
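To make the Shut Up And Multiply point concrete, here is a minimal expected-value sketch in Python. Every number in it is a hypothetical placeholder, which is exactly the problem: the whole cryonics dispute is over what these probabilities and payoffs should be, not over the arithmetic.

```python
# Illustrative only: expected-value comparison of signing up for cryonics
# versus not signing up, under invented numbers. Nothing here is a claim
# about the real probabilities, which are (as noted above) highly debatable.

p_revival = 0.05           # hypothetical probability that cryonics works
value_revival = 1_000_000  # hypothetical utility of successful revival
cost_signup = 50_000       # hypothetical lifetime cost of membership

ev_sign_up = p_revival * value_revival - cost_signup
ev_decline = 0.0

print(f"EV(sign up) = {ev_sign_up:,.0f}")   # 0.05 * 1,000,000 - 50,000 = 0
print(f"EV(decline) = {ev_decline:,.0f}")
```

At these made-up numbers the decision sits exactly on a knife edge, which is the point: small changes in the disputed probability flip the answer, so the multiplication is easy and the disagreement is entirely about the inputs.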
Perhaps more x-rationality would help humanity as a whole to address those issues, but mostly they come about because so many people aren’t even rational, never mind x-rational.
Perhaps—but many a logician has believed in God. Take somebody like Thomas Aquinas—he was for a long time the paradigm of rationality. I’d suggest it takes x-rationality to truly shatter your pre-existing losing framework and re-examine your priors.
Do you have evidence that it was lack of x-rationality that enabled Aquinas to believe in God, rather than (1) different evidence from what we have now (e.g., no long track record of outstandingly successful materialistic science; no evolutionary biology to provide an alternative explanation for the adaptation of living things; no geological investigations to show that the earth is very much older than Aquinas’s religious beliefs said it was) and (2) being embedded in a culture that pushed him much harder towards belief in God than ours does to us?
Robert Aumann, to take an example Eliezer’s used a few times, is pretty expert in at least some aspects of the art of x-rationality, and is also Orthodox Jewish.
Exactly: Aumann has the same evidence that you or I have about materialist scientific facts, yet chooses not to apply x-rationality to evaluate his beliefs accurately.
While I can’t interview Aquinas about the reasons he believed in God, I’m sure the things you listed were causally important. However, if he had had x-rationality, the other elements wouldn’t have made a difference—in some sense, x-rationality is a way of getting around the limitations of a particular culture and time.
Do you think a general AI would have any difficulty disbelieving in God, even if it had been “raised” in a culture in which belief was common and incentivized?
That probably depends on what you mean by “a general AI”. We humans are (approximately) general natural intelligences (indeed, that’s almost the definition of what many people mean by “general” in this context), and plenty of humans have lots of difficulty disbelieving in God. If you mean an AI whose intelligence and knowledge are greatly superhuman, emerging from a human culture in which belief in God is common, then I expect it would (knowing its own intellectual superiority to us) have little difficulty escaping from the cultural presumption of theism. As for a culture of superhuman AIs in which theism was common, I don’t know; the mere existence of such a culture would be extremely interesting and good evidence for something surprising (which might or might not be theism).
I mean an AI along the general lines Eliezer has outlined; that is, an AI which can extrapolate maximally from a given set of evidence.
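As a rough illustration of what "extrapolating maximally from a given set of evidence" might mean, here is a toy Bayesian update in Python. The hypothesis, prior, and likelihoods are all invented for the example; the sketch only shows the mechanics of updating, not any claim about the actual evidence for or against theism.

```python
# Toy Bayesian update: P(H | E) = P(E | H) P(H) / P(E).
# All numbers are hypothetical, chosen only to demonstrate the mechanics.

prior = 0.5            # hypothetical prior P(H) inherited from the "culture"
p_e_given_h = 0.2      # hypothetical likelihood P(E | H)
p_e_given_not_h = 0.8  # hypothetical likelihood P(E | not H)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"P(H | E) = {posterior:.2f}")  # 0.20 under these made-up numbers
```

An agent that updates this way ends up wherever the likelihood ratios push it, whatever prior its "culture" started it with, which is the sense in which such an AI should have little difficulty escaping a theistic presumption.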
By the way, I find it hard to imagine a culture of superhuman AIs in which theism is common. I’d be interested to talk a little more about how that would work—in particular, what evidence each AI would accept from other AIs that would convince them to be a theist.
Yeah, me too. That was rather my point.