Moral skepticism is not a particularly impressive conclusion, since it is the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.
The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that's it. We can make no criticism whatsoever of the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red.
At best we can make empirical statements of the form “A person should act in such-and-such manner in order to achieve some outcome”.
Some people are prepared to bite this bullet. Yet most of those who say they do nonetheless continue to behave as if they believed their actions were more than arbitrary preferences.
My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems—which is the challenge I propose that rationalists should take on.
You may wish to think of your actions as non-arbitrary (that is, justified in some special way; cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others, and so on. But wishing doesn't make it so. You may find it disturbing that you can't "non-arbitrarily" say that "striving for truth is better than killing babies." This kind of thing prompts most people to shy away from moral skepticism, but if you are concerned with rationality, you should hold yourself to a higher standard than that.
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with any non-infinitesimal confidence that progress will be made, and certainly not something for amateur rationalists doing philosophy in their spare time (healthy though that may be). I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime.
Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don’t believe this makes me irrational. In fact, given our current understanding of the problem, I don’t know of any other reasonable approaches.
Incidentally, this position is reminiscent both of Pascal's wager and of an attitude towards morality and AI which Eliezer claimed to have previously held but now rejects as flawed.
OB: “Arbitrary”
(Wait, Eliezer’s OB posts have been imported to LW? Win!)
I’ve read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I’m sure he believes that he brings some new insights, but I would disagree.