Is there any reason to think this process will converge, rather than diverge more and more, as it has for all of history? If there is, it has not been articulated.
Future creatures will probably have bigger genomes, bigger self-descriptions, and so bigger moralities, assuming, of course, that their morality refers to themselves. There might be practical limits on creature size, but these are probably large, leaving a lot of space for evolution in the meantime.
The idea that values will freeze arises from an analysis of self-improving systems which claims that agents will want to preserve their values (see, e.g., Omohundro’s “Basic AI Drives”). In a competitive scenario, agents won’t get their way. So folks imagine one big organism undergoing self-directed evolution, and assume that it will get its way.
One reason for scepticism about this is the possibility of an alien race. If our values freeze and we then meet aliens, we would probably be assimilated. So, lacking confidence that aliens do not exist, we may decide to allow our values to grow, in order to better preserve at least some of them.
How does a self-improving system improve itself, without discovering contradictions or gaps in its values?
By getting a faster brain, more memory, more stored resources and a better world model, perhaps.
Values don’t have to have “contradictions” or “gaps” in them. Say you value printing out big prime numbers. Where are the contradictions or gaps going to come from?
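To make the point concrete, here is a minimal sketch in Python of such a value. The function names are illustrative, not taken from any particular agent framework: the agent cares only about printing ever-larger primes, and nothing about that goal leaves an obvious gap or contradiction to discover.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test (fine for a toy example)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True


def print_big_primes(start: int = 10**6, count: int = 5) -> None:
    """Print the next `count` primes at or above `start`.

    The "value" is exhausted by this behaviour: more (and bigger) primes
    printed is strictly better, and no new knowledge about the world forces
    the goal to contradict itself.
    """
    n, printed = start, 0
    while printed < count:
        if is_prime(n):
            print(n)
            printed += 1
        n += 1


if __name__ == "__main__":
    print_big_primes()
```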
Does value freeze require knowledge freeze?
Usually values and knowledge are considered to be orthogonal, so “no”.