I think you are right that x-rationality doesn’t help an individual win much on a day to day basis. But there are some very important challenges that humanity as a whole is failing for lack of x-rationality.
The current depression.
The fact that we aren’t adequately protecting the earth from asteroids.
DDT being banned.
Nobody’s getting froze.
Religion.
First-past-the-post elections.
Most wars.
At some stage we’re going to have to work out how to talk about politics here. I’ve wondered about a top-level post to find out what we practically all agree on—I suspect for example that few of us think the drug war is a good idea.
DDT isn’t banned, never has been. I’m with you on most everything else.
From a 1972 Environmental Protection Agency press release entitled “DDT Ban Takes Effect”:
The general use of the pesticide DDT will no longer be legal in the United States after today, ending nearly three decades of application during which time the once-popular chemical was used to control insect pests on crop and forest lands, around homes and gardens, and for industrial and commercial purposes.
Religion, FPTP elections and wars are irrational even according to non-x rationality. (With all sorts of caveats, which apply just as much to x-rationality.) The DDT ban thing is a myth, as ciphergoth points out. Asteroids and cryonics, maybe, insofar as making the right decisions there probably involves a large element of Shut Up And Multiply; but actually we are making some effort to spot asteroids early enough, and the probabilities governing whether one should sign up for cryonics are highly debatable.
Perhaps more x-rationality would help humanity as a whole to address those issues, but mostly they come about because so many people aren’t even rational, never mind x-rational.
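To make the Shut Up And Multiply point above concrete, here is a minimal sketch of the kind of expected-value arithmetic involved. Every number in it (revival probability, payoff, cost) is an illustrative assumption for the sake of the example, not an estimate anyone in this thread endorses:

```python
# Toy "Shut Up And Multiply" calculation: multiply each outcome's
# probability by its payoff rather than trusting gut feelings.
# All numbers below are illustrative assumptions, not real estimates.

def expected_net_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net value of an action with one uncertain payoff."""
    return p_success * payoff - cost

# Hypothetical cryonics decision: small probability, large payoff.
p_revival = 0.02           # assumed chance revival ever works
revival_value = 1_000_000  # assumed value of revival (arbitrary units)
total_cost = 50_000        # assumed lifetime cost of signup + insurance

print(expected_net_value(p_revival, revival_value, total_cost))
# 0.02 * 1,000,000 - 50,000 = -30,000 under these assumptions, but a
# p_revival of 0.06 makes it +10,000; the sign hinges on exactly the
# probabilities the comment above calls "highly debatable".
```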
Perhaps—but many a logician has believed in God. Take somebody like Thomas Aquinas—he was for a long time the paradigm of rationality. I’d suggest it takes x-rationality to truly shatter your pre-existing losing framework and re-examine your priors.
Do you have evidence that it was lack of x-rationality that enabled Aquinas to believe in God, rather than (1) different evidence from what we have now (e.g., no long track record of outstandingly successful materialistic science; no evolutionary biology to provide an alternative explanation for the adaptation of living things; no geological investigations to show that the earth is very much older than Aquinas’s religious beliefs said it was) and (2) being embedded in a culture that pushed him much harder towards belief in God than ours does to us?
Robert Aumann, to take an example Eliezer’s used a few times, is pretty expert in at least some aspects of the art of x-rationality, and is also Orthodox Jewish.
Exactly—Aumann has the same evidence that you or I have about materialist scientific facts, yet chooses not to utilize x-rationality to accurately evaluate his beliefs.
While I can’t interview Aquinas about the reasons he believed in God, I’m sure the things you listed were causally important. However, if he had had x-rationality, the other elements wouldn’t have made a difference—in some sense, x-rationality is a way of getting around the limitations of a particular culture and time.
Do you think a general AI would have any difficulty disbelieving in God, even if it had been “raised” in a culture in which belief was common and incentivized?
That probably depends on what you mean by “a general AI”. We humans are (approximately) general natural intelligences (indeed, that’s almost the definition of what many people mean by “general” in this context), and plenty of humans have lots of difficulty disbelieving in God. If you mean an AI whose intelligence and knowledge are greatly superhuman, emerging from a human culture in which belief in God is common, then I expect it would (knowing its own intellectual superiority to us) have little difficulty escaping from the cultural presumption of theism. As for a culture of superhuman AIs in which theism was common, I don’t know; the mere existence of such a culture would be extremely interesting and good evidence for something surprising (which might or might not be theism).
I mean an AI that follows Eliezer’s general outlines of one; that is, an AI which can extrapolate maximally from a given set of evidence.
By the way, I find it hard to imagine a culture of superhuman AIs in which theism is common. I’d be interested to talk a little more about how that would work—in particular, what evidence each AI would accept from other AIs that would convince them to be a theist.
Yeah, me too. That was rather my point.

So by spending our resources on studying rationality, we are cooperating in a giant Prisoner’s Dilemma?
No, people don’t only do good in the hope that good will be done to them; most people value the welfare of others and the survival of humanity inherently, at least to some extent.
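For anyone who wants the Prisoner’s Dilemma framing spelled out, here is a minimal sketch using the standard textbook payoffs (the numbers are the usual illustrative ones, nothing specific to rationality-study):

```python
# Classic Prisoner's Dilemma payoffs (textbook values, purely illustrative).
# Each entry: (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Defection strictly dominates for a purely self-interested player:
for other in ("cooperate", "defect"):
    assert PAYOFFS[("defect", other)][0] > PAYOFFS[("cooperate", other)][0]

# The reply above rejects this framing: if players also value others'
# welfare (or humanity's survival) intrinsically, the effective payoffs
# change, and mutual cooperation need not be a dilemma at all.
```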