My thinking of late is that if you embrace rationality as your raison d’être, you almost inevitably conclude that human beings must be exterminated. This extermination is sometimes given a progressive spin by calling it “transhumanism” or “the Singularity”, but that doesn’t fundamentally change its nature.
To dismiss so many aspects of our humanity as “biases” is to dismiss humanity itself. The genius of irrationality is that it doesn’t get lost in these genocidal cul-de-sacs nor in the strange loops of Gödelian undecidability in trying to derive a value system from first principles (I have no idea what this sentence means). Civilizations based on the irrational revelations of prophets have proven themselves to be more successful and appealing over a longer period of time than any rationalist society to date. As we speak, the vast majority of humans being born are not adopting, and never will adopt, a rational belief system in place of religion. Rationalists are quite literally a dying breed. This leads me to conclude that the rationalist optimism of post-Enlightenment civilization was a historical accident and a brief bubble, and that we’ll be returning to our primordial state of irrationality going forward.
It’s fun to fantasize about transcending the human condition via science and technology, but I’m skeptical in the extreme that such a thing will happen—at least in a way that is not repugnant to most current value systems.
if you embrace rationality as your raison d’être, you almost inevitably conclude that human beings must be exterminated.
Let me try to guess your reasoning. If you have “I want to be rational” as one of your terminal values, you will decide that your human brain is a mere hindrance, and so you will turn yourself into a rational robot. But since we are talking about human values, it should be noted that smelling flowers, love, and having a family are also among your terminal values. So this robot would still enjoy smelling flowers, love, and having a family—after all, if you value doing something, you wouldn’t want to stop liking it, because if you didn’t like it you would stop doing it.
But then, because rational agents always get stuck in genocidal cul-de-sacs, this robot who still feels love is overwhelmed by the need to kill all humans, leading to the extermination of the human race.
Since I probably wasn’t close at all, maybe you could explain?
Civilizations based on the irrational revelations of prophets have proven themselves to be more successful and appealing over a longer period of time than any rationalist society to date.
Depends what you mean by “based on” (and to a lesser extent “prophet” if you want to argue about North Korea, China and the old USSR). People seem to prefer, for example, America over Iran as a place to live.
As we speak, the vast majority of humans being born are not adopting, and never will adopt, a rational belief system in place of religion. Rationalists are quite literally a dying breed.
Hang on, that’s a bit of a non sequitur. Just because rationalists won’t become a majority within the current generational cohort doesn’t mean we’re shrinking in number, or even in proportion. I haven’t seen the statistics for other countries (where coercion and violence likely play some role in religious matters), but in Western nations non-religious people have been increasing in number. In my own nation we’re seeing the priesthood age and shrink (implying that the proportion of “religious” people committed enough to make it their career is falling), and in my city, Adelaide, the “City of Churches”, quite a few churches have been converted into shops and nightclubs.
It’s fun to fantasize about transcending the human condition via [contextual proxy for rationality], but I’m skeptical in the extreme that such a thing will happen—at least in a way that is not repugnant to most current value systems.
What would the least repugnant possible future look like, keeping in mind that all the details of such a future would have to actually hold together? (Since “least repugnant possible”, taken literally, would mean the details held together by coincidence, consider instead a future that was, say, one-in-a-billion for its non-repugnance.) If bringing about the least repugnant future you could were your only goal, what would you do—what actions would you take?
When I imagine those actions, they resemble rationality. They include trying to develop formal methods to understand, as best you can, which parts of the world are value systems that deserve to be taken into account for purposes of defining repugnance; how to avoid missing or persistently disregarding any value system that deserves to be taken into account; how to take those value systems into account even where they seem to contradict each other; and how to avoid missing or persistently disregarding major implications of those value systems. They also include being very careful not to gloss over flaws in your formal methods or overall approach (especially foundational problems like Gödelian undecidability, unsystematic use of reflection, bounded rationality, and the definition of slippery concepts like “repugnant”), in case the flaws point to a better alternative.
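To make “formal methods” a little more concrete, here is a minimal, purely illustrative sketch of one way contradictory value systems could be taken into account when defining repugnance: score each candidate future by the worst repugnance any value system assigns to it, so that no value system is silently dropped. Every name and number below is hypothetical, and this is only one of many possible aggregation rules, not anyone’s actual proposal.

```python
# Each (hypothetical) value system maps candidate futures to a repugnance score in [0, 1].
value_systems = {
    "secular_humanist": {"status_quo": 0.4, "uploads_only": 0.7, "mixed": 0.3},
    "traditional_religious": {"status_quo": 0.3, "uploads_only": 0.9, "mixed": 0.4},
    "transhumanist": {"status_quo": 0.6, "uploads_only": 0.2, "mixed": 0.3},
}

def worst_case_repugnance(future: str) -> float:
    """Repugnance of a future as judged by its harshest critic among the value systems."""
    return max(scores[future] for scores in value_systems.values())

futures = ["status_quo", "uploads_only", "mixed"]
least_repugnant = min(futures, key=worst_case_repugnance)
print(least_repugnant, worst_case_repugnance(least_repugnant))  # with these toy numbers: mixed 0.4
```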
When you imagine the actions of someone whose only goal was to bring about the least repugnant future they could, what do those actions resemble?
(How much repugnance is there in the “default”/“normal”/“if only it could be normal” future you imagine? Is that amount of repugnance the amount you take for granted—do you assume that no substantially less repugnant future is achievable, and do you assume that safely achieving a future at least roughly that non-repugnant would not generally require doing anything unprecedented? How repugnant would a typical future be in which humanity had preventably gone extinct because of irrationality, how repugnant would a future be in which humanity had gone extinct because of a preventable choice of repugnance-insensitive rationality, and how relatively likely would these extinctions be under the two conditions of global irrationality and global repugnance-insensitive rationality? Would a person who cared about the repugnance of the future, when choosing between advocating reason and advocating unreason, try to think through effects like this and take them into account, given that the repugnance of the future is at stake?)
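As a toy illustration of the kind of bookkeeping that parenthetical gestures at, here is a sketch of an expected-repugnance comparison between the two global conditions. Every probability and score is a made-up placeholder meant only to show the shape of the calculation, not an estimate of anything.

```python
# Hypothetical scenarios: probability of a preventable extinction under each condition,
# plus repugnance scores (in [0, 1]) for the extinction and survival outcomes.
scenarios = {
    "global_irrationality": {
        "p_extinction": 0.30,
        "repugnance_if_extinct": 0.95,
        "repugnance_if_survive": 0.50,
    },
    "repugnance_insensitive_rationality": {
        "p_extinction": 0.40,
        "repugnance_if_extinct": 0.99,
        "repugnance_if_survive": 0.30,
    },
}

def expected_repugnance(s: dict) -> float:
    """Probability-weighted repugnance of a scenario's two outcomes."""
    p = s["p_extinction"]
    return p * s["repugnance_if_extinct"] + (1 - p) * s["repugnance_if_survive"]

for name, s in scenarios.items():
    print(f"{name}: {expected_repugnance(s):.3f}")
```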
I feel like this is close to the heart of a lot of concerns here: really it’s a restatement of the Friendly AI problem, no?
The back door seems to always be that rationality is “winning” and therefore if you find yourself getting caught up in an unpleasant loop, you stop and reexamine. So we should just be on the lookout for what’s happy and joyful and right—
But I fear there’s a Catch-22 there in that the more on the lookout you are, the further you wander from a place where you can really experience these things.
I want to disagree that “post-Enlightenment civilization [is] a historical bubble”, because I think civilization today is at least partially stable (maybe less so in the US than elsewhere). I, of course, can’t be too certain without some wildly dictatorial world policy experiments, but curing diseases and supporting general human rights seem like positive “superhuman” steps that could stably exist.
Well, if rationality were traded on an exchange, the irrational expectations for it probably did peak during the Enlightenment, but I don’t know what that really means to us now. The value reason has brought us is still accumulating, and with that, reason’s power to produce value is also accumulating.
I’m not sure I follow your first notion, but I don’t doubt that rationality is still marginally profitable. I suppose you could couch my concerns as whether there is a critical point in the profitability of rationality: at some point, does becoming more rational cause more loss in our value system than gain? If so, do we toss out rationality or do we toss out our values?
And if it’s the latter, how do you continue to interact with those who didn’t follow in your footsteps? Create a (self-defeating) religion?
Well, it would be surprising to me if becoming more or less rational had no impact on one’s value system, but if we hold that constant and imagine rationality as a linear progression, then certainly it is possible that at some point, as that line moves up, the awesomeness trend-line is moving down.
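If it helps to picture that, here is a minimal toy model of the “critical point” worry: rationality rises linearly while a deliberately arbitrary, made-up “awesomeness” payoff first rises with it and then declines, giving exactly the kind of crossover asked about above. Nothing here is an empirical claim; the functional form and numbers are illustrations only.

```python
# Toy model: awesomeness is concave in rationality, so it peaks partway up the line.
def awesomeness(rationality: float) -> float:
    """Hypothetical payoff: rises with rationality at first, then falls (peaks at rationality = 1.0)."""
    return rationality * (1.0 - rationality / 2.0)

levels = [i / 10 for i in range(0, 21)]          # rationality from 0.0 to 2.0 in steps of 0.1
values = [awesomeness(r) for r in levels]
peak = max(range(len(levels)), key=lambda i: values[i])
print(f"awesomeness peaks at rationality = {levels[peak]:.1f}")
# Beyond the peak, the awesomeness trend-line moves down even though rationality keeps rising.
```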