Moral intuitions demonstrably exist.
That is, many people demonstrably do endorse and reject certain kinds of situations in a way that we are inclined to categorize as a moral (rather than an aesthetic or arbitrary or pragmatic) judgment, and demonstrably do signal those endorsements and rejections to one another.
All of that behavior has demonstrable influences on how people are born and live and die, suffer and thrive, are educated and remain ignorant, discover truths and believe falsehoods, are happy and sad, etc.
I believe all of that stuff matters, so I believe how the moral intuitions that influence that stuff are formed, and how they can be influenced, is worth understanding.
And, of course, the SIAI folks believe it matters because they want to engineer an artificial system whose decisions have predictable relationships to our moral intuitions even when it doesn’t directly consult those intuitions.
Now, maybe none of that stuff is what you’re talking about… it’s hard to tell precisely what you are rejecting the study of, actually, so I may be talking past you.
If what you mean to reject is the study of morality as something that exists in the world outside of our minds and our behavior, for example, I agree with you.
I suspect the best way to encourage the rejection of that is to study the actual roots of our moral judgments; as more and more of our judgments can be rigorously explained, there will be less and less room for a “god of the moral gaps” to explain them.
And I agree with NihilCredo that the distinction between “applied morality” and “theoretical morality” is not a stable one—especially when considering large-scale engineering projects—so refusing to consider theoretical questions simply ensures that we’re unprepared for the future.
Also, thought experiments are often useful tools to clarify what our intuitions actually are.
I think we may indeed be talking past each other, so I will try to state my case more cogently.
I am not denying that people do possess ideas about something named “morality”. It would be absurd to claim otherwise, as we are here discussing such ideas.
I am denying that, even if I accept all of their assumptions, individuals who claim these ideas to be more than subjective—by which I think I mean that they claim their ideas can be applied to a group rather than only to one man, the holder of the ideas—can convince me that these ideas are not wholly subjective and individual-dependent.
If it is the case that morality is individual only, then that is an interesting conclusion and something to talk about, but it does seem, at least to a first approximation, that for a judgment to be considered moral, it must have some broader applicability among individuals, rather than concerning but one person. What can Justice be if it is among one man only? This seems a critical part of what is meant by “morality”. It is in this latter, broad case, that moral philosophy appears null.
If you possess an idea of morality and desire that I consider it to have some connection with the world and with all persons—and surely I must require that it have such a connection, since moral claims attempt to dictate the interactions between people and thus cannot be content to be contained in one mind alone—at least enough of a connection that you can, through reasoned argument, convince me that your claims are both valid and sound, then surely your ideas must make reference to principles that I can discover individually both to exist and to serve as predicates of your ideas. If you cannot elucidate these foundations, then how can I be brought to your view through reason? This was the intent of my original criticism: to ask why these foundations are so lousy, and to beg that someone make them otherwise if moral claims are to be made.
I think that this is the crux of my objection. I cannot find moral claims that I can be brought to accept through reason alone, as even in the most impressive cases such claims are deeply infected by subjective assumptions that are incommunicable and—dare I write it?—irrational.
(This is to change the subject somewhat, but I find that an idea’s capacity to be communicated is necessary to its being considered the result of reason, and objective. I use that last word with 10,000 pounds of hesitation.)
However, and now I think that we are talking to each other directly, if, when you write of moral ideas, you refer only to those ideas that currently do exist, whether logically well-constructed or not, and you say that you are interested in studying these for their effects, then I am agreed.
I certainly agree that, whether I am convinced of its validity or use, morality does exist as a thing in the minds of men and thus as an influence on human life. But, I think that restricting ourselves to this case has gargantuan ramifications for the definition of “moral” and drastically cuts the domain of objects on which moral ideas can act. It seems this domain can include only those which involve human beings in some fashion. If morality is exclusively a consequence of the history of human evolution and particular to our biology—and I do agree that it is—then I feel that I am bound by it only as far as my own biology has imprinted this moral sense upon me. If it is just biological and not possible to derive through application of reason, then, if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?
I suspect that we agree, but that I took a bottom-up approach to get there and left the conclusion implicit, if present at all. All apologies.
I have avoided in this post any struggle with the word “morality” itself. I suspect we could write reams on that. If you think it worthwhile, we should, as the debate may be swung by our ability or inability to pin down this notion.
(Note: As for SIAI, I think imprinting upon an AI human notions of moral judgments would be hideously dangerous for two reasons:
1) Human beings seem capable in almost every situation of overthrowing such judgments. If said AI is bound in similar manner, then what matters it for controlling or predicting its behavior?
2) If said AI is to possess a notion of justice and of a being who has abdicated certain rights due to immoral conduct, what will its judgment be of the humanity that has taught it morals? Can it not glance, not at history, but simply at the current state of the world and find immediately and with disgust ample grounds for the conclusion that very many humans have surrendered any claim to the moral life? It would be a strange moral algorithm if an AI did not come to this conclusion. Perhaps that is rather the point, as morality even among humans is a strange and often-blind algorithm.)
I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them.
That said, I think you might be introducing unnecessary confusion by talking about “subjective” and “individual.” To pick a simple and trivial objection, it might be that two people, by happenstance, share a set of moral intuitions, and those intuitions might include references to other people. For example, they might each believe “it is best to satisfy the needs of others,” or “it is best to believe things believed by the majority,” or “it is best to believe things confirmed by experiment.” Indeed, hundreds of people might share those intuitions, either by happenstance or by mutual influence. In that case, the intuition would be inter-subjective and non-individual, but still basically the kind of thing we’re talking about.
I assume you mean to contrast it with objective, global things like, say, gravity. Which is fine, but it gets tricky to say that precisely.
"It seems this domain can include only those which involve human beings in some fashion."
Here, again, things get slippery. First, I can have moral intuitions about non-humans… for example, I can believe that it’s wrong to club cute widdle baby seals. Second, it’s not obvious that non-humans can’t have moral intuitions.
"if I desire to make of myself a creature of reason alone, what care have I for it, but as a curiosity of anthropology?"
If that is in fact your desire, then you haven’t a care for it. Or, indeed, for much of anything else.
Speaking personally, though, I would be loath to give up my love of pie, despite acknowledging that it is a consequence of my own biology and history.
Agreed that imprinting an AI with human notions of moral judgments, especially doing so with the same loose binding to actual behavior humans demonstrate, would be relatively foolish. This is, of course, different from building an AI that is constrained to behave consistently with human moral intuitions.
Agreed that such an AI would easily conclude that humans are not bound by the same constraints that it is bound by. Whether this would elicit disgust or not depends on a lot of things. Sharks are not bound by my moral intuitions, but they don’t disgust me.
I think we might still be talking past each other, but here goes:
The reason I posit and emphasize a distinction between subjective judgments and those that are otherwise—I have a weak reason for not using the term “objective” here—is to highlight a particular feature that moral claims lack, and in lacking which they are weakened. That is, I take a claim to be subjective if, to hold it myself, I must come upon it by chance; I cannot be brought to it through reason alone. It is an opinion or intuition that I cannot trace logically in my own thought, so I cannot communicate it to you by guiding you down the same line. The reason I think this distinction matters is that without this logical structure, it is not possible for someone to bring me to experience the same intuition through reasoned argument or demonstration. Without this feature, morality must be an island state. This is ruinous, because morality inevitably and necessarily touches upon interactions between people. If it cannot reach beyond one mind, it cannot do much.
Perhaps we should come to common agreement, or at least an agreed-upon disagreement, on this point before we try other things.
Other Things:
I suspect—this is an idea I have only recently invented and have not entirely examined—that any idea that is irrational needs must be essentially incommunicable. How could it be otherwise? If you can lay out the logic behind a thought and give support to its predicates carefully and patiently, and of course your logic is valid and your predicates sound, how can I, if I am open to reason, not accept what you say as true? That is, if you can demonstrate your ideas as the logical consequences of some set of known truths, I must, because that is what logical consequence is, accept your ideas as true.
I have not witnessed this done with moral notions. Hence my doubt about their existence as rational ideas. I do not doubt that people have moral ideas, but I doubt that they can be communicated to people who have not already come upon them by chance—and even those who have can only be partially sure that you and they are of common mind.
Perhaps I can draw a parallel with the distinction between Greek and Babylonian mathematics: the difference between demonstration by proof and attempted demonstration by repeated example. The first (except to mathematicians of the subtle variety), if done properly, seems by its nature able to accomplish the goal of communication in every case. Can this be said of the latter? I think only when the examples given are logically structured so as to be a form of the first type.
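To make the contrast concrete with a small example of my own choosing (not drawn from either tradition’s actual texts), take the claim that the sum of the first n odd numbers is n². Demonstration by repeated example: 1 = 1², 1 + 3 = 4 = 2², 1 + 3 + 5 = 9 = 3², 1 + 3 + 5 + 7 = 16 = 4², and so on for as long as patience holds. Demonstration by proof: the claim holds for n = 1, since 1 = 1²; and if 1 + 3 + ⋯ + (2n − 1) = n², then adding the next odd number gives n² + (2n + 1) = (n + 1)². Anyone who grants the premises and the rules of inference must grant the conclusion for every n, whereas the examples, however many are piled up, compel assent to nothing beyond themselves.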
"I agree with your basic point that moral intuitions reflect psychological realities, and that attempts to derive moral truths without explicitly referring to those realities will inevitably turn out to implicitly embed them."
I have not wanted to make this claim. What I am claiming is only that this claim does appear, thus far, to hold water. However, absence of evidence is not evidence of absence, etc. etc. I am asking for someone to show me the light, as it were.
"First, I can have moral intuitions about non-humans… for example, I can believe that it’s wrong to club cute widdle baby seals."
As for your first objection, have you not given precisely the sort of case I was talking about? The moral judgment stated is not about bears clubbing baby seals; it is about humans doing it! Clearly that does involve humans. Come up with a moral judgment about trees overusing carbon dioxide and you’ll have me pinned.
"If that is in fact your desire, then you haven't a care for it. Or, indeed, for much of anything else."
That is just silly, is it not? I must at least care for reason itself. The desire to be rational is a passion indeed. If I must be paradoxical at least that far, I will take it and move on. As for your love of pie, if it is really a consequence of your biology and history, then you CANNOT give it up. You cannot will yourself to unlove it—or, if you can, then it is not the product of the aforesaid forces alone.
I am fairly sure that we aren’t talking past each other, I just disagree with you on some points. Just to try and clarify those points...
You seem to believe that a moral theory must, first and foremost, be compelling… if moral theory X does not convince others, then it can’t do much worth doing. I am not convinced of this. For example, working out my own moral theory in detail allows me to recognize situations that present moral choices, and identify the moral choices I endorse, more accurately… which lowers my chances of doing things that, if I understood better, I would reject. This seems worth doing, even if I’m the only person who ever subscribes to that theory.
You seem to believe that if moral theory X is not rationally compelling, then we cannot come to agree on the specific claims of X except by chance. I’m unconvinced of that. People come to agree on all kinds of things where there is a payoff to agreement, even where the choices themselves are arbitrary. Heck, people often agree on things that are demonstrably false.
Relatedly, you seem to believe that if X logically entails Y, then everyone in the world who endorses X necessarily endorses Y. I’d love to live in that world, but I see no evidence that I do. (That said, it’s possible that you are actually making a moral claim that having logically consistent beliefs is good, rather than a claim that people actually do have such beliefs. I’m inclined to agree with the former.)
I can have a moral intuition that bears clubbing baby seals is wrong, also. Now, I grant you that I, as a human, am less likely to have moral intuitions about things that don’t affect humans in any way… but my moral intuitions might nevertheless be expressible as a general principle which turns out to apply to non-humans as well.
You seem to believe that things I’m biologically predisposed to desire, I will necessarily desire. But lots of biological predispositions are influenced by local environment. My desire for pie may be stronger in some settings than others, and it may be brought lower than my desire for the absence of pie via a variety of mechanisms, and etc. Sure, maybe I can’t “will myself to unlove it,” but I have stronger tools available than unaided will, and we’re developing still-stronger tools every year.
I agree that the desire to be rational is a desire like any other. I intended “much of anything else” to denote an approximate absence of desire, not a complete one.
I think an important part of our disagreement, at least for me, is that you are interested in people generally and in morality as it is now—at least, your examples come from this set—while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a case.
I do not see that people coming to agree on things that are demonstrably false is a point against me. This fact is precisely why I am turned off by the current state of ethical thought, as it seems infested with examples of this circumstance. I am not impressed by people who will agree to an intellectual point because it is convenient. I put truth first; at least, that is the point of this inquiry.
I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?
You’re right, I’m concerned with morality as it applies to people generally.
If you are exclusively concerned with sufficiently rational people, then we have indeed been talking past each other. Thanks for clarifying that.
As to your question: I submit that for that community, there are only two principles that matter:
Come to agreement with the rest of the community about how to best optimize your shared environment to satisfy your collective preferences.
Abide by that agreement as long as doing so is in the long-term best interests of everyone you care about.
...and the justification for those principles is fairly self-evident. Perhaps that isn’t a morality, but if it isn’t I’m not sure what use that community would have for a morality in the first place. So I say: either of course there is, or there’s no reason to care.
The specifics of that agreement will, of course, depend on the particular interests of the people involved, and will therefore change regularly. There’s no way to build that without actually knowing about the specific community at a specific point in time. But that’s just implementation. It’s like the difference between believing it’s right to not let someone die, and actually having the medical knowledge to save them.
That said, if this community is restricted to people who, as you implied earlier, care only for rationality, then the resulting agreement process is pretty simple. (If they invite people who also care for other things, it will get more complex.)
Very well put.
Perhaps you’ve already encountered this, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence.
I think it’s one of Yudkowsky’s better articles.
(On a tangential note, I’m amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, though I had no conscious recollection of the connection when I got around to reading it myself.)
I agree vehemently with your comment.