I feel like this is close to the heart of a lot of concerns here: really it’s a restatement of the Friendly AI problem, no?
The back door always seems to be that rationality is “winning,” and therefore if you find yourself getting caught up in an unpleasant loop, you stop and reexamine. So we should just be on the lookout for what’s happy and joyful and right.
But I fear there’s a Catch-22 there: the more on the lookout you are, the further you wander from a place where you can really experience these things.
I want to disagree that “post-Enlightenment civilization [is] a historical bubble” because I think civilization today is at least partially stable (maybe less so in the US than elsewhere). I, of course, can’t be too certain without some wildly dictatorial world policy experiments, but curing diseases and supporting general human rights seem like positive “superhuman” steps that could stably exist.
Well, if rationality were traded on an exchange, the irrational expectations for it probably did peak during the Enlightenment, but I don’t know what that really means for us now. The value reason has brought us is still accumulating, and with that, reason’s power to produce value is also accumulating.
I’m not sure I follow your first notion, but I don’t doubt that rationality is still marginally profitable. I suppose you could couch my concerns as whether there is a critical point in rationality’s profit: at some point, does becoming more rational cause more loss in our value system than gain? If so, do we toss out rationality or do we toss out our values?
And if it’s the latter, how do you continue to interact with those who didn’t follow in your footsteps? Create a (self-defeating) religion?
Well, it would be surprising to me if becoming more or less rational had no impact on one’s value system, but if we hold that constant and imagine rationality as a linear progression, then certainly it’s possible that at some points, as that line moves up, the awesomeness trend-line moves down.