In my understanding of things, rationality does not involve values and altruism is all about values. They are orthogonal.
LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That’s a characteristic of this particular community, not a feature of either rationalism or EA.
rationality does not involve values and altruism is all about values. They are orthogonal.
Effective altruism isn’t just being extra super altruistic, though. EA as currently practiced presupposes certain values, but its main insight isn’t value-driven: it’s that you can apply certain quantification techniques toward figuring out how to optimally implement your value system. For example, if an animal rights meta-charity used GiveWell-inspired methods to recommend groups advocating for veganism or protesting factory farming or rescuing kittens or something, I’d call that a form of EA despite the differences between its conception of utility and GiveWell’s.
Seen through that lens, effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself. We don’t dictate values, and (social pressure aside) we probably can’t talk people into EA if their value structure isn’t compatible with it, but we might easily make readers with the right value assumptions more open to quantified or counterintuitive methods.
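As a rough sketch (with invented charity names and numbers), that value-agnostic quantification might look something like this in Python: you plug in whatever you count as a unit of value, and the machinery just ranks interventions by cost per unit.

    # Hypothetical figures only: rank interventions by estimated cost per unit
    # of whatever the donor happens to value (lives saved, QALYs, kittens rescued...).
    interventions = {
        "bednet_distribution": {"cost": 5000, "units_of_value": 1000},
        "deworming":           {"cost": 5000, "units_of_value": 2500},
        "kitten_rescue":       {"cost": 5000, "units_of_value": 40},
    }

    def cost_per_unit(entry):
        return entry["cost"] / entry["units_of_value"]

    # The method is indifferent to what the units measure; only the donor's
    # value system decides that.
    for name, entry in sorted(interventions.items(), key=lambda kv: cost_per_unit(kv[1])):
        print(f"{name}: ${cost_per_unit(entry):.2f} per unit of value")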
Yes, of course.
So does effective proselytizing, for example. Or effective political propaganda.
Take away the “presupposed values” and all you are left with is effectiveness.
Yes, LW exposure might easily dispose readers with those inclinations toward EA-like forms of political or religious advocacy, and if that’s all they’re doing then I wouldn’t call them effective altruists (though only because politics and religion are not generally considered forms of altruism). That doesn’t seem terribly relevant, though. Politics and religion are usually compatible with altruism, and nothing about effective altruism requires devotion solely to GiveWell-approved causes.
I’m really not sure what you’re trying to demonstrate here. Some people have values incompatible with EA’s assumptions? That’s true, but it only establishes the orthogonality of LW ideas with EA if everyone with compatible values were already an effective altruist, and that almost certainly isn’t the case. As far as I can tell there’s plenty of room for optimization.
(It does establish an upper bound, but EA’s market penetration, even after any possible LW influence, is nowhere near it.)
I’m really not sure what you’re trying to demonstrate here.
That rationality and altruism are orthogonal. That effective altruism is predominantly altruism and “effective” plays second fiddle to it. That rationality does not imply altruism (in case you think it’s a strawman, tom_cr seems to claim exactly that).
If effective altruism were predominantly just altruism, we wouldn’t be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been. I see this as strong evidence that it’s something distinct, and therefore that it makes sense to talk about something like LW rationality methods bolstering it despite rationality’s silence on pure questions of values.
Yes, it’s just [a method of quantifying] effectiveness. But effectiveness in this context, approached in this particular manner, is more significant (and, perhaps more importantly, a lot less intuitive) than I think you’re giving it credit for.
we wouldn’t be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been
I don’t know about that. First, EA is competition for a limited resource, the donors’ money, and, even worse, EA keeps telling others that they are doing it wrong. Second, the idea that charity money should be spent in effective ways is pretty uncontroversial. I suspect (that’s my prior, adjustable by evidence) that most of the criticism is aimed at specific recommendations of GiveWell and others, not at the concept of getting more bang for your buck.
Take a look at Bill Gates. He is explicitly concerned with the effectiveness and impact of his charity spending—to the degree that he decided to bypass most established nonprofits and set up his own operation. Is he a “traditional” or an “effective” altruist? I don’t know.
and, perhaps more importantly, a lot less intuitive
Yes, I grant you that. Traditional charity tends to rely on purely emotional appeals. But I don’t know if that’s enough to push EA into a separate category of its own.
rationality does not involve values and altruism is all about values
Rationality itself does not involve values. But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.
But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.
So? Let’s say I value cleansing the Earth of untermenschen. Rationality can indeed help me achieve my goals and “optimize more efficiently”. Once you start associating rationality with sets of values, I don’t see how you can associate it with only “nice” values like altruism, but not “bad” ones like genocide.
Maybe, but at least they’ll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.
Because there’s a large set of “nice” values that most of humanity shares.
Along with a large set of “not so nice” values that most of humanity shares as well. A glance at history should suffice to demonstrate that.
I think one of the lessons from history is that we can still massacre each other even when everyone is acting in good faith.
Yikes!
May I ask you, what is it you are trying to achieve by being rational? Where does the motivation come from?
Or to put it another way, if it is rational to do something one way but not another, where does the difference derive from?
In my view, rationality is the use of soundly reliable procedures for achieving one’s goals. Rationality is 100% about values. Altruism (depending on how you define it) is a subset of rationality as long as the social contract is useful (i.e. nearly all the time).
Altruism (depending on how you define it) is a subset of rationality as long as the social contract is useful (i.e. nearly all the time).
Basing altruism on contractarianism is very different from basing altruism on empathy. For one thing, the results may be different (one might reasonably conclude that we, here in the United States or wherever, have no implicit social contract with the residents of e.g. Nigeria). For another, it’s one level removed from terminal values, whereas empathy is not, so it’s a different sort of reasoning, and not easily comparable.
(btw, I also think there’s a basic misunderstanding happening here, but I’ll let Lumifer address it, if he likes.)
Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.
empathy is not [one level removed from terminal values]
Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise “do exactly as you feel immediately inclined, at all times,” would be all we needed to know about morality.
Yes. Any goals.
No. Rationality is about implementing your values, whatever they happen to be.
An interesting claim :-) Want to unroll it?
That’s what I meant.
Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending on how you define it) are not orthogonal.
Let’s define altruism as being nice to other people. Let’s describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. it furthers one’s goals, i.e. it is rational. I know this is simplistic, but it should be more than enough to make my point.
Perhaps you interpret altruism to be being nice in a way that is not self-serving. But then, there can be no sense in which altruism could be effective or non-effective. (And also your initial reasoning that “rationality does not involve values and altruism is all about values” would be doubly wrong.)
Let’s define altruism as being nice to other people. Let’s describe the social contract as a mutually held belief that being nice to other people improves society.
Let’s define things the way they are generally understood, or at least close to it. You didn’t make your point.
I understand altruism, generally speaking, as valuing the welfare of strangers so that you’re willing to attempt to increase it at some cost to yourself. I understand the social contract as a contract, a set of mutual obligations (in particular, it’s not a belief).
Apologies if my point wasn’t clear.
If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we were using similar enough definitions of altruism to understand each other.
We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won’t affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.
Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.
Thus, if I implement the belief / obligation / fact of the social contract, and that is useful, then being nice is rational.
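A toy repeated-game sketch of that argument, with made-up payoffs: defecting (“not being nice”) pays better in any single round, but if the social contract means we keep dealing with people who remember how we treated them, the nice strategy wins over time.

    # Standard iterated prisoner's dilemma with invented payoffs.
    NICE, NASTY = "cooperate", "defect"
    PAYOFF = {  # (my move, partner's move) -> my payoff this round
        (NICE, NICE): 3, (NICE, NASTY): 0,
        (NASTY, NICE): 5, (NASTY, NASTY): 1,
    }

    def lifetime_payoff(my_move, rounds=100):
        """Partner plays tit-for-tat: starts nice, then mirrors my last move."""
        partner_move, total = NICE, 0
        for _ in range(rounds):
            total += PAYOFF[(my_move, partner_move)]
            partner_move = my_move
        return total

    print("always nice :", lifetime_payoff(NICE))   # 300
    print("always nasty:", lifetime_payoff(NASTY))  # 104: one big win, then mutual distrust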
If altruism entails a cost to the self, then your claim that altruism is all about values seems false
Why does it seem false? It is about values, in particular the relationship between the value “welfare of strangers” and the value “resources I have”.
If the social contract requires being nice to people
It does not. The social contract requires you not to infringe upon the rights of other people and that’s a different thing. Maybe you can treat it as requiring being polite to people. I don’t see it as requiring being nice to people.
Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.
I think we have a pretty major disagreement about that :-/
If welfare of strangers is something you value, then it is not a net cost.
Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn’t match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn’t match your usage, and is based on a gross misunderstanding of how morality comes about (I’ve written about this misunderstanding here; see section 4, “Honesty as meta-virtue,” for the most relevant part).
Under that old, confused definition, yes, altruism cannot be rational (but it is still not orthogonal to rationality: we could still try to measure how irrational any given altruistic act is; each act still sits somewhere on the scale of rationality).
It does not.
You seem very confident of that. It’s utterly bizarre, though, that you claim that not infringing on people’s rights is not part of being nice to people.
But the social contract demands much more than just not infringing on people’s rights. (By the way, where do those rights come from?) We must actively seek each other out, trade (even if it’s only trade in ideas, like now), and cooperate (this discussion wouldn’t be possible without certain adopted codes of conduct).
The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don’t trust you for some reason, then the agreement breaks down. You lose income, I lose the screws I need for my factory employing 500 people, we all go bust. Your knowledge of how to make screws and my expertise in making screwdrivers now counts for nothing, and everybody is screwed.
We help maintain trust by being nice to each other outside our direct trading. Furthermore, by being nice to people in trouble who we have never before met, we enhance a culture of trust that people in trouble will be helped out. We therefore increase the chances that people will help us out next time we end up in the shit. Much more importantly, we reduce a major source of people’s fears. Social cohesion goes up, cooperation increases, and people are more free to take risks in new technologies and / or economic ventures: society gets better, and we derive personal benefit from that.
I think we have a pretty major disagreement about that :-/
The social contract is a technology that entangles the values of different people (there are biological mechanisms that do that as well). Generally, my life is better when the lives of people around me are better. If your screw factory goes bust, then I’m negatively affected. If my neighbour lives in terror, then who knows what he might do out of fear—I am at risk. If everybody was scared about where their next meal was coming from, then I would never leave the house for fear that what food I have would be stolen in my absence—the economy collapses. Because we have this entangled utility function, what’s bad for others is bad for me (in expectation), and what’s bad for me is bad for everybody else. For the most part, then, any self-defeating behaviour (e.g. irrational attempts to be nice to others) is bad for society and, in the long run, doesn’t help anybody.
I hope this helps.
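To make the “entangled utility” idea concrete, here is a toy model (the coupling weight is assumed for illustration, not measured): my utility depends partly on the average welfare of the people around me, so a hit to a stranger’s welfare is, in expectation, a hit to mine.

    # Toy entangled utility: my own welfare plus a fraction of my neighbours' average.
    def my_utility(my_welfare, others_welfare, care_weight=0.2):
        # care_weight is an assumed coupling constant, purely for illustration
        return my_welfare + care_weight * sum(others_welfare) / len(others_welfare)

    print(my_utility(10.0, [10.0, 10.0, 10.0]))  # 12.0
    print(my_utility(10.0, [10.0, 10.0, 2.0]))   # ~11.47: a neighbour's bad luck costs me too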
If welfare of strangers is something you value, then it is not a net cost.
Having a particular value cannot have a cost. Values start to have costs only when they are realized or implemented.
Costlessly increasing the welfare of strangers doesn’t sound like altruism to me. Let’s say we start telling people “Say yes and magically a hundred lives will be saved in Chad. Nothing is required of you but to say ‘yes’.” How many people will say “yes”? I bet almost everyone. And we will be suspicious of those who do not—they would look like sociopaths to us. That doesn’t mean that we should call everyone but sociopaths an altruist—you can, of course, define altruism that way, but at this point the concept becomes diluted into meaninglessness.
We continue to have major disagreements about the social contract, but that’s a big discussion that should probably go off into a separate thread if you want to pursue it.
Values start to have costs only when they are realized or implemented.
How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?
Costlessly increasing the welfare of strangers doesn’t sound like altruism to me.
OK, so we are having a dictionary writers’ dispute—one I don’t especially care to continue. So every place I used ‘altruism,’ substitute ‘being decent’ or ‘being a good egg,’ or whatever. (Please check, though, that your usage is somewhat consistent.)
But your initial claim (the one that I initially challenged) was that rationality has nothing to do with values, and that claim is manifestly false.
I don’t think we understand each other. We start from different points, ascribe different meaning to the same words, and think in different frameworks. I think you’re much confused and no doubt you think the same of me.
The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don’t trust you for some reason, then the agreement breaks down.
Either you’re using a broader definition of the social contract than I’m familiar with, or you’re giving it too much credit. The model I know provides (one mechanism for) the legitimacy of a government or legal system, and therefore of the legal rights it establishes, including an expectation of enforcement; but you don’t need it to have media of exchange, nor cooperation between individuals, nor specialization. At most it might make these more scalable.
And of course there are models that deny the existence of a social contract entirely, but that’s a little off topic.
If you look closely, I think you should find that the legitimacy of government and legal systems comes from the same mechanism as everything I talked about.
You don’t need it to have media of exchange, nor cooperation between individuals, nor specialization
Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that’s a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.
Yes, these things can exist to a small degree in post-apocalyptic chaos, but they will not exactly flourish. (That’s why we call it post-apocalyptic chaos.) But the extent to which these things can exist is a measure of how well the social contract flourishes. Don’t get too hung up on exactly, precisely what ‘social contract’ means; it’s only a crude metaphor. (There is no actual bit of paper anywhere.)
I may not be blameless in terms of clearly explaining my position, but I’m sensing that a lot of people on this forum just plain dislike my views, without bothering to take the time to consider them honestly.
Actually, the whole point of governments and legal systems [...] is to encourage cooperation between individuals [...] And specialization trivially depends upon cooperation.
I have my quibbles with the social contract theory of government, but my main objection here isn’t to the theory itself, but that you’re attributing features to it that it clearly isn’t responsible for. You don’t need post-apocalyptic chaos to find situations that social contracts don’t cover: for example, there is no social contract on the international stage (pre-superpower, if you’d prefer), but nations still specialize and make alliances and transfer value.
The point of government (and therefore the social contract, if you buy that theory of legitimacy) is to facilitate cooperation. You seem to be suggesting that it enables it, which is a different and much stronger claim.
I think that international relations is a simple extension of social-contract-like considerations.
If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) “Clearly isn’t responsible for” is a phrase you should be careful about using.
You seem to be suggesting that [government] enables [cooperation]
I guess you mean that I’m saying cooperation is impossible without government. I didn’t say that. Government is a form of cooperation. Albeit a highly sophisticated one, and a very powerful facilitator.
I have my quibbles with the social contract theory of government
I appreciate your frankness. I’m curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?
If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.)
The social contract, according to Hobbes and its later proponents, is the implicit deal that citizens (and, by logical extension, other subordinate entities) make with their governments, trading off some of their freedom of action for greater security and potentially the maintenance of certain rights. That implies some higher authority with compelling powers of enforcement, and there’s no such thing in international relations; it’s been described (indeed, by Hobbes himself) as a formalized anarchy. Using the phrase to describe the motives for cooperation in such a state extends it far beyond its original sense, and IMO beyond usefulness.
There are, however, other reasons to cooperate: status, self-enforced codes of ethics, enlightened self-interest. It’s these that dominate in international relations, which is why I brought that up.