Humans evolved charitable tendencies because such tendencies served as a marker to nearby humans that a given individual is a dependable ally. Those who expend their resources to help others are more likely than non-helpers to care about people in general, and are therefore more likely to care about their companions.
Yes, exactly. So: why write a guide on how to give so as to help people—rather than a guide about how to appear to be caring and generous?
Presumably there is an audience interested in that topic, but what are their motivations? Are they in historically unusual social circumstances where really helping others the most sends the most reliable signal to others that they care? Are they trying to distance themselves from base motives for good deeds as much as possible? Would that be to avoid having their motives exposed, or to placate their own consciences? In short: what gives?
The (evo psych) reason why humans evolved sexual tendencies presumably has something to do with reproduction. So why write guides on how to give and get sexual pleasure, rather than guides to fertility?
Presumably there is an audience sincerely interested in giving and getting sexual pleasure for its own sake. I doubt that this fact surprises you. So why do you pretend to be surprised that there are people who want to help the world for the sake of actually helping the world?
The analogy seems backwards: people who want to help for the sake of helping, as opposed to just feeling good, would be analogous to people consciously interested in fertility, as opposed to just sexual pleasure.
(Since people of the latter type do exist, your point still holds, of course.)
Evolution gives us the “wet and tinglies” when we engage in sex because evolution wants us to reproduce. Some rational folks retarget the terminal value to simply having sex.
Evolution gives us the “warm and fuzzies” when we do good because evolution wants us to be seen as doing good. Some rational folks retarget the terminal value to simply doing good (whether seen or not).
There is nothing irrational about this retargeting. We are free agents. We can choose any terminal values that we can rationalize to ourselves. Retargetings like the two suggested here are the easiest, because they are minimally in conflict with our evolutionary programming.
Surely, “retargeting” their values is a deeply irrational act for almost any agent to perform, at least if we are talking about instrumental rationality. The reason is that your original goals are typically obliterated by the retargeting, so rational agents should normally seek to avoid such an event happening to themselves, and should certainly not initiate it. Omohundro discusses the issue here:

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption.
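To make the instrumental-rationality point concrete, here is a minimal sketch (my own toy model, not code from Omohundro’s paper) of an expected-utility agent offered a value retargeting. Because it scores the proposal with its current utility function, it predictably declines; this is the goal-preservation drive in miniature. All the names and outcome values below are hypothetical.

```python
# Minimal sketch of Omohundro-style goal preservation: a toy
# expected-utility agent whose "world" is a single choice among
# fixed outcomes. (Hypothetical example, not from the paper.)

def best_outcome(utility, outcomes):
    """Return the outcome an agent maximising `utility` would pick."""
    return max(outcomes, key=utility)

def accepts_retarget(current_u, proposed_u, outcomes):
    """Decide whether an agent holding `current_u` agrees to become an
    agent holding `proposed_u`. Crucially, both futures are scored with
    the CURRENT utility function, so any retargeting that moves future
    choices away from what `current_u` favours gets rejected."""
    keep = best_outcome(current_u, outcomes)     # what I would choose as-is
    switch = best_outcome(proposed_u, outcomes)  # what my successor would choose
    return current_u(switch) >= current_u(keep)

# Example: retargeting from "reproduction" to "pleasure".
outcomes = ["have_children", "have_fun"]
reproduction = {"have_children": 1.0, "have_fun": 0.2}.get
pleasure = {"have_children": 0.2, "have_fun": 1.0}.get

print(accepts_retarget(reproduction, pleasure, outcomes))  # -> False
```

Whether human value change is well modelled by a clean expected-utility maximiser is, of course, part of what is in dispute in this thread.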
As for retargeting in general, the argument against it has always reminded me of the advice, “Never admit a mistake. It doesn’t really count as a mistake until you admit it.”
As for Omohundro’s paper, my reaction was negative from the first reading. His reasoning was so unconvincing that I found myself losing confidence in my own judgement regarding things on which I had started out agreeing with him.
What would it mean for values to be mistaken, though? Who would be the judge of that?

Normally, values are not right or wrong. Rather, “right” and “wrong” are value judgements.

The person who used to claim that he held a certain set of (not reflectively consistent) values, but who now understands that those values were a mistake.
I understand that there are ways of programming an AI so that its values will never change. But that does not mean that an AI must be programmed in that way, or even that it should be programmed in that way. And it definitely does not mean that rational humans cannot change their minds on their ultimate values.
I note that your example appears to generalise poorly. Yes, values can have bugs in them that need working out—but the idea that values are likely to be preserved by rational agents kicks in most seriously after that has happened.
Also, we had best be careful about making later agents the judges of their earlier selves. For each enlightenment, I expect we can find a corresponding conversion to a satanic cult.
FWIW, whether there are ways of making a powerful machine so that its values will never change is still a point of debate. Nobody has ever really proved that you can make a powerful self-improving system value anything other than its own measure of utility in the long term. Omohundro and Yudkowsky make hand-waving arguments about this, but they are not very convincing, IMHO. It would be delightful if we could demonstrate something useful about this question, but so far, nobody has.
Yes, values can have bugs in them that need working out—but the idea that values are likely to be preserved by rational agents kicks in most seriously after that has happened.
Please let me know when it happens.
To my mind, coming up with a set of terminal values which are reflectively consistent and satisfactory in every other way is at least as difficult and controversy-laden as coming up with a satisfactory axiomatization of set theory.
What do you think of the Axiom of Determinacy? I fully expect that my human values will be different from my trans-human values. 1 Corinthians 13:11
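As a gloss for readers who don’t know it, here is the Axiom of Determinacy, stated from standard set theory rather than from anything in this thread. Its interest for the analogy is that it is a workable axiom that contradicts the full Axiom of Choice, so the choice between them cannot be settled by consistency alone.

```latex
% Axiom of Determinacy (AD). For a payoff set A, players I and II
% alternately choose natural numbers, building an infinite sequence x;
% player I wins iff x lies in A. AD asserts every such game is determined:
\[
\forall A \subseteq \omega^{\omega}:\quad
G(A)\ \text{is determined, i.e. player I or player II has a winning strategy.}
\]
% ZF + AD refutes the full Axiom of Choice, so adopting AD over AC (or
% vice versa) is itself a choice of "values" for one's set theory.
```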
It sounds like a poorly-specified problem, so perhaps don’t expect to solve that one.

As you may recall, I think that nature has its own maximand, namely entropy, and that the values of living things are just a manifestation of that.

Perplexed is drawing the analogy between the behaviours that are adaptive to DNA genes and those that are not, which seems pretty reasonable.

Your answer seems to suggest that the modern environment is not like the ancestral one, due to the effects of human culture, and that this causes people to malfunction and behave maladaptively.
That is certainly one hypothesis to explain this type of behaviour. However, I can’t help noticing that some individuals have become famous moral philosophers by advocating it.
Weakening the analogy rather: more of charity still seems to be for signalling purposes than sex is for reproductive purposes, which makes a guide to sexual pleasure less surprising. Also, I think “most” sex is supposed to support human pair bonding and signalling purposes, rather than reproduction directly, even in the ancestral environment; i.e. humans are rather like bonobos.
I expect that many who profess to be actually helping the world do so at the expense of their own fitness. However, I doubt this is a simple case of brains being hijacked by deleterious memes through an inadequate memetic immune system. For instance, I figure some individuals are benefiting by spreading such memes around. So, I am interested in the details, to better understand what is happening.
You claimed I was “pretending to be surprised”—while what I was actually doing was asking questions. Your interpretation seems to presume dubious motives :-|
Not dubious at all. I assumed your purpose was rhetorical. By feigning incomprehension of something carrying a stench of irrationality, you signal that you are pure in your rationalism. Surely you don’t believe that there is something dubious about signaling.
The other thing to say about this is: I don’t think helping strangers, or explaining to others how to help strangers—without any thought to what it might signal—is at all irrational.
I understand perfectly well that, for people with certain kinds of utilitarian goal systems, this kind of thing all makes perfect sense, and is absolutely the rational thing to do.
It is pretty strange that any such utilitarian people exist in the first place—but if we accept that axiomatically, things like the discussion on this thread follow—without any need for invoking irrationality.
It is more that I don’t think I was pretending at all. I did ask questions, but that doesn’t mean I was surprised by the existence of an audience for the presentation.
I have some hypotheses about that (some of which I listed) - but I am not so certain of their relative merit that I don’t welcome input from others on the topic. Some of those involved clearly have quite a different perspective from me, and I am curious about what they think is happening.