So: game theory, in the form of reciprocity, kin selection/tag-based cooperation, and virtue signalling.
As J. Storrs Hall puts it in “Intelligence Is Good”:
There is but one good, namely, knowledge; and but one evil, namely ignorance.
—Socrates, from Diogenes Laertius’s Life of Socrates
As a matter of practical fact, criminality is strongly and negatively correlated with IQ in humans. The popular image of the tuxedo-wearing, suave jet-setter jewel thief to the contrary notwithstanding, almost all career criminals are of poor means as well as of lesser intelligence.
Defecting typically gets you ostracised, and it doesn’t make much sense in a smart society that can track reputations.
We already know about universal instrumental values. They illustrate what moral attractors look like.
I discussed this issue some more in Handicapped Superintelligence.
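A minimal sketch of the reputation point above, in Python. The payoff numbers, the 0.5 reputation threshold, and the agent line-up are illustrative assumptions of mine, not anything taken from the sources quoted; the point is just that once partners can consult a public record of past defections, a habitual defector quickly faces nothing but mutual defection and falls behind the cooperators.

```python
import random

# Illustrative one-shot payoffs (my numbers, not from the cited sources):
# mutual cooperation 3 each, defecting on a cooperator 5 vs 0, mutual defection 1 each.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    def __init__(self, name, strategy):
        self.name = name
        self.strategy = strategy          # "cooperator" or "defector"
        self.defections = 0
        self.interactions = 0
        self.score = 0

    def reputation(self):
        # Publicly visible fraction of past moves that were defections.
        return self.defections / self.interactions if self.interactions else 0.0

    def move(self, partner):
        if self.strategy == "defector":
            return "D"
        # Cooperators consult the public record and defect on known defectors.
        return "D" if partner.reputation() > 0.5 else "C"

def play_round(a, b):
    move_a, move_b = a.move(b), b.move(a)
    pay_a, pay_b = PAYOFF[(move_a, move_b)]
    for agent, move, pay in ((a, move_a, pay_a), (b, move_b, pay_b)):
        agent.interactions += 1
        agent.defections += (move == "D")
        agent.score += pay

random.seed(0)
agents = [Agent("coop1", "cooperator"), Agent("coop2", "cooperator"),
          Agent("coop3", "cooperator"), Agent("cheat", "defector")]

for _ in range(1000):
    play_round(*random.sample(agents, 2))

for agent in sorted(agents, key=lambda ag: -ag.score):
    print(f"{agent.name:6s} score={agent.score:5d} reputation={agent.reputation():.2f}")
```

With the fixed seed, the defector finishes last: it exploits a cooperator once, its record then marks it as a defector, and from then on it only ever sees the mutual-defection payoff.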
Doesn’t most of this amount to morality as an attractor for evolved social species?
Evolution creates social species, though. Machines will be social too: their memetic relatedness might well be very high, an enormous win for kin selection based on shared memes. And of course they are evolving, and will keep evolving; cultural evolution is still evolution.
So this presumes that the machines in question will evolve in social settings? That’s a pretty big assumption. Moreover, empirically speaking, in-group loyalty of that sort isn’t nearly enough to ensure that you are friendly with nearby entities: look at how many hunter-gatherer groups are in a state of almost constant war with their neighbors. The attitude towards other sentients (such as humans) isn’t going to be great even if there is some approximate moral attractor of that sort.
So this presumes that the machines in question will evolve in social settings? That’s a pretty big assumption.
I’m not sure what you mean. It presumes that there will be more than one machine. The ‘lumpiness’ of the universe is likely to produce natural boundaries. It seems to be a small assumption.
Moreover, empirically speaking, in-group loyalty of that sort isn’t nearly enough to ensure that you are friendly with nearby entities: look at how many hunter-gatherer groups are in a state of almost constant war with their neighbors.
Sure, but cultural evolution produces cooperation on a massive scale.
The attitude towards other sentients (such as humans) isn’t going to be great even if there is some approximate moral attractor of that sort.
Right—so: high morality seems to be reasonably compatible with some ant-squishing. The point here is about moral attractors—not the fate of humans.
I’m not sure what you mean. It presumes that there will be more than one machine. The ‘lumpiness’ of the universe is likely to produce natural boundaries. It seems to be a small assumption.
It is a major assumption. To take the most obvious issue: if someone starts up an attempted AGI on a single computer (say it is the only machine with enough power), then this won’t happen. It also won’t happen unless there is a large variety of machines actually engaging in generational copying. That means that if one starts with, say, ten slightly different machines, and the population doesn’t grow in distinct entities, this isn’t going to do what you want. And if the entities lack a distinction between genotype and phenotype (as computer programs, unlike biological entities, actually do), then this is also off, because one will not be subject to a Darwinian system but rather to a pseudo-Lamarckian one, which doesn’t act the same way.
The point here is about moral attractors—not the fate of humans.
So your point seems to come down purely to the fact that evolved entities will do this, and a vague hope that people will deliberately put entities into this situation. This is neither helpful for the fundamental philosophical claim (which doesn’t care about what is empirically likely to happen) nor practically helpful, since there’s no good reason to think that any machine entities will actually be put into such a situation.
I’m not sure what you mean. It presumes that there will be more than one machine. The ‘lumpiness’ of the universe is likely to produce natural boundaries. It seems to be a small assumption.
It is a major assumption. To take the most obvious issue: if someone starts up an attempted AGI on a single computer (say it is the only machine with enough power), then this won’t happen.
A multi-planetary living system is best described as multiple agents, IMHO. The unity you suggest would represent relatedness approaching 1: the ultimate win in terms of altruism and cooperation.
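To make the relatedness point concrete, Hamilton’s rule is the usual yardstick: an altruistic act is favoured when r * b > c, that is, when relatedness times the benefit to the recipient exceeds the cost to the actor. The cost and benefit figures below are made up for illustration; the only point is that as r approaches 1, any act whose benefit exceeds its cost is favoured.

```python
def hamilton_favours(r, b, c):
    """Hamilton's rule: altruism is selected for when r * b > c."""
    return r * b > c

# Illustrative figures: the act costs the actor 1 unit of fitness
# and delivers 2 units to the recipient.
cost, benefit = 1.0, 2.0
for r in (0.125, 0.25, 0.5, 0.95, 1.0):   # roughly: cousin ... sibling ... near-clone
    print(f"relatedness {r:5.3f}: altruism favoured? {hamilton_favours(r, benefit, cost)}")
```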
It also won’t happen unless there is a large variety of machines actually engaging in generational copying.
Without copying there’s no life. Copying is unavoidable. Variation is practically inevitable too, through local adaptation, for instance.
And if the entities lack a distinction between genotype and phenotype (as computer programs, unlike biological entities, actually do), then this is also off, because one will not be subject to a Darwinian system but rather to a pseudo-Lamarckian one, which doesn’t act the same way.
Computer programs do have the split between heritable and non-heritable elements, which is the basic idea here, or it should be (see the sketch below).
Darwin believed in cultural evolution: “The survival or preservation of certain favoured words in the struggle for existence is natural selection”—so surely cultural evolution is Darwinian.
Most of the game theory that underlies cooperation applies to both cultural and organic evolution. In particular, reciprocity, kin selection, and reputations apply in both domains.
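A sketch of that heritable/non-heritable split, with assumed details (the toy fitness function, TARGET vector, mutation rate, and population size are all inventions for illustration): only the genotype, a parameter vector, is copied and mutated between generations, while whatever happens during evaluation is discarded, which is what keeps the process Darwinian rather than Lamarckian.

```python
import random

random.seed(0)
TARGET = [0.3, -1.2, 0.8]     # arbitrary target the genotypes evolve towards

def fitness(genotype):
    # The "phenotype" here is just how the parameters perform on an arbitrary
    # task: negative squared error against TARGET (invented for illustration).
    return -sum((g - t) ** 2 for g, t in zip(genotype, TARGET))

def mutate(genotype, rate=0.1):
    # Heritable variation: small random tweaks to a copied genotype.
    return [g + random.gauss(0, rate) for g in genotype]

# Initial population of random genotypes (parameter vectors).
population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]

for _ in range(50):
    # Selection acts on phenotypic performance...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...but only genotypes are copied and mutated. Nothing acquired during
    # evaluation is written back into the offspring, so inheritance is
    # Darwinian rather than Lamarckian.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("best genotype:", [round(g, 2) for g in max(population, key=fitness)])
```

The point is only the structure of the loop: copy, mutate, select; run it and the best genotype drifts towards TARGET.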
So your point seems to come down purely to the fact that evolved entities will do this, and a vague hope that people will deliberately do so. This is neither helpful for the fundamental philosophical claim (which doesn’t care about what is empirically likely to happen) nor practically helpful, since there’s no good reason to think that any machine entities will actually be put into such a situation.
I didn’t follow that bit—though I can see that it sounds a bit negative.
Evolution has led to social, technological, intellectual and moral progress. It’s conservative to expect these trends to continue.
Attractors are features of evolutionary systems; it’d be weird if there weren’t attractors in goal space. Here’s a paper which touches on that (I don’t necessarily buy all of it, but the part about morality as an attractor in the goal systems of evolving, cooperating game-theoretic agents is interesting).
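As a toy illustration of attractors in strategy space (the payoffs and round count below are standard textbook choices of mine, not taken from the paper mentioned above): under replicator dynamics, a population starting with more than roughly a 6 percent share of reciprocators is pulled all the way to full reciprocation, while one starting below that is pulled to full defection. Both endpoints are attractors; the cooperative one has much the larger basin with these numbers.

```python
# Discrete-time replicator dynamics for a 10-round repeated prisoner's dilemma
# between reciprocators (tit-for-tat) and unconditional defectors.
# Payoffs T=5, R=3, P=1, S=0 and the round count are standard textbook choices.

ROUNDS = 10
T, R, P, S = 5, 3, 1, 0

# Total payoffs over the repeated game, row player vs column player.
payoff = {
    ("TFT", "TFT"): R * ROUNDS,                 # cooperate throughout
    ("TFT", "ALLD"): S + P * (ROUNDS - 1),      # exploited once, then mutual defection
    ("ALLD", "TFT"): T + P * (ROUNDS - 1),      # exploit once, then mutual defection
    ("ALLD", "ALLD"): P * ROUNDS,
}

x = 0.2    # initial share of reciprocators (the all-defect basin lies below ~1/17 here)
for step in range(41):
    f_tft = x * payoff[("TFT", "TFT")] + (1 - x) * payoff[("TFT", "ALLD")]
    f_alld = x * payoff[("ALLD", "TFT")] + (1 - x) * payoff[("ALLD", "ALLD")]
    mean_fitness = x * f_tft + (1 - x) * f_alld
    x = x * f_tft / mean_fitness                # replicator update
    if step % 10 == 0:
        print(f"step {step:2d}: share of reciprocators = {x:.3f}")
```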
Attractors are features of evolutionary systems; it’d be weird if there weren’t attractors in goal space.
Sure. Think about the optimal creature, for instance. And don’t anybody tell me that fitness is relative to the environment; we can see the environment.
Another point: even if there’s no competition (and natural selection) involving alien races, the fear of such competition is likely to produce a similar adaptive effect, moving effective values towards universal instrumental values.
Why do we have any reason to think this is the case?