Choosing those goals is not something that rationality can help much with—the best it can do is try to identify where goals are not internally consistent.
I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn’t we try to pin down and either discard or accept some version of “purpose,” as a sort of first exercise in instrumental rationality?
I mention objectivity because I don’t think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There’s little to discuss if you don’t, because “everything is permitted.” That said, I think ethics has to recognize each person’s competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize it better than you. Usually. Ethics also has to bend to reality, so the more “important” thing isn’t agreement on theoretical questions, but cooperation towards mutually agreed goals. So I’m in substantial agreement with:
Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.
And I would thoroughly enjoy a post on this topic.
Ok, here is what I don’t agree with:

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn’t we try to pin down and either discard or accept some version of “purpose,” as a sort of first exercise in instrumental rationality?
Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit; in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I’m primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self-consistent set of goals.
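To make “mutually exclusive when closely examined” concrete, here is a toy consistency check; the goals and pairwise preferences are invented purely for illustration:

```python
from itertools import permutations

# Invented pairwise preferences: each pair (a, b) means "a is preferred to b".
prefers = {("security", "freedom"), ("freedom", "wealth"), ("wealth", "security")}
goals = {"security", "freedom", "wealth"}

def intransitive_triples(prefers, goals):
    """Return triples (a, b, c) where a beats b and b beats c,
    yet c also beats a -- a cycle that no single ranking can satisfy."""
    return [(a, b, c) for a, b, c in permutations(goals, 3)
            if (a, b) in prefers and (b, c) in prefers and (c, a) in prefers]

print(intransitive_triples(prefers, goals))
# Non-empty output means the goals are mutually exclusive as stated, and at
# least one preference has to yield before the set becomes self-consistent.
```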
There’s little to discuss if you don’t, because “everything is permitted.”
To the extent that we share a common evolutionary history, our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit to agreeing on rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is ‘wrong’ in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the ‘freedom’ to murder at will. That equilibrium can break down, and I’m interested in ways to robustly maintain the ‘good’ equilibrium rather than the ‘bad’ equilibrium that has existed at certain times and in certain places in history. I don’t, however, feel the need to ‘prove’ that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle—I simply take it as a given.
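The murder example has the shape of a one-shot prisoner’s dilemma. A minimal sketch, with made-up payoffs, shows why the rule is collectively worth its cost even though each individual is tempted to break it:

```python
# Made-up payoffs for a symmetric two-player game. "defect" stands for
# breaking the agreed rule (killing at will); "cooperate" for keeping it.
PAYOFF = {  # (my move, your move) -> my payoff
    ("cooperate", "cooperate"): 3,  # the 'good' equilibrium: peaceful society
    ("cooperate", "defect"):    0,  # I keep the rule while you break it
    ("defect",    "cooperate"): 5,  # short-term gain from breaking it alone
    ("defect",    "defect"):    1,  # the 'bad' equilibrium: everything permitted
}

# Whatever the other player does, defecting pays more for me...
assert PAYOFF[("defect", "cooperate")] > PAYOFF[("cooperate", "cooperate")]
assert PAYOFF[("defect", "defect")] > PAYOFF[("cooperate", "defect")]
# ...yet mutual cooperation beats mutual defection, which is why we all gain
# from enforcing the rule rather than relying on individual restraint.
assert PAYOFF[("cooperate", "cooperate")] > PAYOFF[("defect", "defect")]
print("cooperation is collectively better, defection individually tempting")
```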
Why do you think it needs to be confronted?
…
I don’t, however, feel the need to ‘prove’ that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle—I simply take it as a given.
I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?
Most problems in the world seem to arise from conflicting goals, either internally or between different people. I’m primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts.
Again, this is ultimately the important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I’m saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.
What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?
I’m very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding, I don’t, however, see any reason to expect to find, or to want to find, a more fundamental basis for those preferences.
Our goals are what they are because they were the kind of goals that made our ancestors successful. They’re the kind of goals that lead to people like us with just those kinds of goals… There doesn’t need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as trying to explain complex entities by appealing to a more fundamental complex creator of those entities.
Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want.

Hopefully we can all agree on that.
I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.
To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to “objective” morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?
Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences.
We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also.
I don’t really recognize a distinction here. In my view, the explanation shows why preferences are their own justification.
Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.
I think I at least partially agree—sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities.
The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized.
This looks like the utilitarian position and is where I would disagree to some extent. I don’t believe it’s necessary or desirable for individuals to prefer ‘aggregated’ utility. If forced to choose, I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize ‘aggregate’ utility. I believe that is perfectly moral and a natural part of our value system. I am, however, happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.
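The gap between the two positions is just a difference in which function gets maximized, as a small sketch with invented utility numbers can show:

```python
# Invented utilities for two outcomes, split between "me and mine" and strangers.
outcomes = {
    "A": (2, 10),  # modest for my circle, large benefit to people in general
    "B": (6, 3),   # large for my circle, modest benefit to people in general
}

def aggregate(name):
    """Utilitarian chooser: everyone's utility counts equally."""
    mine, others = outcomes[name]
    return mine + others

def partial(name, weight=3.0):
    """Partial chooser: weights family and friends above strangers.
    The weight of 3.0 is arbitrary, chosen only for illustration."""
    mine, others = outcomes[name]
    return weight * mine + others

print(max(outcomes, key=aggregate))  # 'A' -- best in aggregate
print(max(outcomes, key=partial))    # 'B' -- best for me and mine
```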