This has interesting parallels to the Friendly AI problem. For example, one could posit that material wealth might somehow be a suitable arbiter, but I can imagine plenty of situations where C2 displaces C1 (corporate lobbying?), followed by global ecological catastrophes. Here, dollars take the place of smiley faces strewn across the solar system. Maybe the problem of a sustainably benevolent truth-seeking group is, on some level, the same problem as FAI?
Yes! The problem of Friendly Corporate Behavior is an urgent and unsolved one. (Indeed, corporations have many of the attributes of artificial intelligences, though of course not all.)
The sustainably benevolent moral group is not Friendly AI; it is Friendly NI (natural intelligence). The two problems are probably closely related, but I can see a few important differences: NIs had to evolve, so they’re going to start out optimized for reproduction. AIs are designed, so they’re optimized for whatever you optimize them for.
My prediction: The ones optimized for reproduction are the ones that will be around in the long term.
Not necessarily, because there’s no law saying that AIs have to die. This changes the evolutionary calculus significantly; you don’t need to reproduce if you can just keep existing and expand your power over the cosmos.
But you’re right, insofar as AIs that rapidly self-destruct and never reproduce are not going to stick around long. (I think this is actually a tautology, but it’s a tautology with the character of a mathematical theorem—definitely true, but not obvious or trivial.)
It’s also worth considering that NIs and AIs operate under different constraints, though. NIs have to change gradually, piece by piece, gene by gene; AIs can be radically overhauled in a single generation. This gives them access to places on the fitness landscape that we could never reach, even places that are in fact evolutionarily stable once you get there.
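To make that concrete, here is a toy sketch (the landscape and the numbers in it are invented for illustration, not a model anyone here has proposed): gene-by-gene hill climbing gets trapped at a local optimum, while a one-shot redesign jumps straight to a point that no single mutation can improve on.

```python
import random

# Toy "deceptive" fitness landscape over bit-string genomes: the global
# optimum (all ones) sits across a fitness valley from an easy local
# optimum (all zeros), so no sequence of single-bit improvements reaches it.
def fitness(genome):
    n = len(genome)
    if sum(genome) == n:
        return n + 10       # the redesign target: best point on the landscape
    return n - sum(genome)  # otherwise, fewer ones is strictly better

def incremental_evolution(n=20, steps=10_000):
    """NI-style search: mutate one gene at a time, keep only non-worsening steps."""
    genome = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        candidate = genome[:]
        candidate[random.randrange(n)] ^= 1  # single-gene mutation
        if fitness(candidate) >= fitness(genome):
            genome = candidate
    return fitness(genome)

def radical_redesign(n=20):
    """AI-style search: overhaul the whole genome in one generation."""
    return fitness([1] * n)

print("gene-by-gene search:", incremental_evolution())  # converges to 20 (all zeros)
print("one-shot redesign:  ", radical_redesign())       # reaches 30 (all ones)
```

The all-ones point is also stable in the relevant sense: once you’re there, every single-gene mutation is strictly worse, so incremental evolution would never leave it, yet incremental evolution could never have found it.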
As wedrifid pointed out, that depends on what one can do about the lightspeed limit. And thermodynamics. I also don’t think escaping death from old age changes evolution that much. Humans are prone to geriatric diseases because selection can’t do much for us past our reproductive years; beings without a fixed lifespan won’t face that.
I highly doubt that no AI will ever destroy another, though.
That just means that they’ll evolve without the constraints of genetics, much as designs and memes do.
I think it’s a mistake to treat superhuman AI as magic. In some contexts it will seem like magic, but not in all. Human habitations viewed from 10,000 meters look like growths of lichen. In some contexts, some dogs are “smarter” than some people. Human intelligence gives us a tremendous advantage over all other life on Earth, but it is not magic. Superhuman intelligence is not magic. It’s just intelligence.
Apart from the practical lightspeed limitations. You do need to reproduce or in some other way split yourself into space-separated parts if you wish to expand your power over a sufficient distance.
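For a rough sense of scale, a back-of-the-envelope sketch (the target list is just illustrative; the constants are standard figures): the round-trip signal delay alone makes a single undivided agent unworkable at interstellar distances.

```python
# Round-trip signal delay for one agent coordinating its own distant parts.
C = 299_792_458         # speed of light, m/s
LIGHT_YEAR = 9.4607e15  # metres
SECONDS_PER_YEAR = 3.15576e7

targets = {  # illustrative distances, in metres
    "Moon": 3.844e8,
    "Mars (average)": 2.25e11,
    "Proxima Centauri": 4.24 * LIGHT_YEAR,
    "Galactic centre": 26_000 * LIGHT_YEAR,
}

for name, metres in targets.items():
    round_trip = 2 * metres / C  # seconds, there and back
    print(f"{name:17s} {round_trip:>12.3g} s "
          f"(~{round_trip / SECONDS_PER_YEAR:.3g} years)")
```

At Proxima the feedback loop is already about eight and a half years, so any “part” out there has to make its own decisions, which is most of the way to being offspring.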
One of our mind children might read this someday and think, “Distance? What a quaint idea!”
Solving Friendliness involves capturing desirable ethical guidelines in a robust and sustainable way, so I’d expect the relationship between Friendliness and sustainably benevolent truth-seeking to depend a lot on the relationship between ethics and truth-seeking. I’d agree that they are thematically related, but very much non-identical.