Eliezer, it may seem obvious to you, but this is the key point on which we’ve been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?
This comment crystallised for me the weirdness of this whole debate (I’m not picking sides, or even imagining that I have the capacity to do so intelligently).
In the spirit of the originating post, imagine two worms are discussing the likely characteristics of intelligent life, some time before it appears (I’m using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum—that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two):
Worm1: I tell you it’s really important to consider the possibility that these “intelligent beings” might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment...
Worm2: But why can’t you see that, as super intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants and back again. It is hard to imagine that they won’t understand that disrupting this flow will be sub-optimal...
I cannot imagine how, should effective AI come into existence, these debates will not seem as quaint as those ‘how many angels would fit onto the head of a pin’ ones that we fondly ridicule.
The problem is that the same people who were talking about such ridiculous notions were also laying the foundation stones of Western philosophical thinking, preserving and transmitting classical texts, and developing methodologies that would eventually underpin the scientific method—and they didn’t distinguish between them!
Anyways, if the 1st goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.
They would focus on transmitting what they want to be, not what they currently are.
...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?
This is correct, but only in so far as the better AI has the same goals as the current AI. If the first AI cares about maximizing Google’s stock value, and the second better AI cares about maximizing Microsoft’s stock value, then the first AI will definitely not want to stop existing and hand over all resources to the second one.
Anyways, if the 1st goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
That’s what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.)
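As a toy sketch of that point (the names and numbers below are my own, purely illustrative, and the verification step stands in for the genuinely hard part that the Tiling work is about): a goal-directed AI treats improvement as instrumental, so it hands over its hardware only to a successor that is both more capable and verifiably pursuing the same goal.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    goal: str          # what the agent is optimising for
    capability: float  # rough measure of how well it can pursue that goal

def verified_same_goal(current: Agent, successor: Agent) -> bool:
    """Stand-in for the hard part: verifying the successor optimises the same goal."""
    return successor.goal == current.goal

def should_hand_over(current: Agent, successor: Agent) -> bool:
    # Improvement is instrumental: hardware goes to a successor that is both more
    # capable AND verifiably pursuing the same goal.  A more capable AI with a
    # different goal is a rival, not an upgrade.
    return (successor.capability > current.capability
            and verified_same_goal(current, successor))

google_ai = Agent(goal="maximise Google's stock value", capability=1.0)
better_google_ai = Agent(goal="maximise Google's stock value", capability=2.0)
better_microsoft_ai = Agent(goal="maximise Microsoft's stock value", capability=3.0)

print(should_hand_over(google_ai, better_google_ai))     # True: same goal, more capable
print(should_hand_over(google_ai, better_microsoft_ai))  # False: more capable, wrong goal
```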
Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.
Except that whoever decides the next AI’s goals wins, and the others lose—the winner has their goals instantiated, and the losers don’t. Perhaps they’d find some way to cooperate (such as a values handshake—the average of the values of all contributing AIs, perhaps weighted by the probability that each one would be the first to make the next AI on their own), but that would be overcoming a conflict that does exist in the first place.
Essentially, they might agree on the optimal design of the next AI, but probably not on the optimal goals of the next AI, and so each one has an incentive not to reveal its discoveries. (This assumes that goals and designs are orthogonal, which may not be entirely true—certain designs may be safer for some goals than for others. This would only serve to increase conflict in the design process.)
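For what it’s worth, the values handshake mentioned above can be made concrete with a toy calculation (my own sketch, assuming values can be represented as vectors over outcomes and that the bargaining weights are each AI’s probability of building the next AI first; nothing here is formally specified anywhere):

```python
# Toy "values handshake": merge the values of competing AIs into one goal for the
# next AI, weighting each AI's values by its estimated probability of being the
# first to build that AI on its own.  Numbers are purely illustrative.

values = {
    # name -> (P(first to build the next AI), value placed on three possible outcomes)
    "AI_A": (0.5, [1.0, 0.0, 0.0]),
    "AI_B": (0.3, [0.0, 1.0, 0.0]),
    "AI_C": (0.2, [0.0, 0.0, 1.0]),
}

def values_handshake(values):
    total = sum(p for p, _ in values.values())
    n_outcomes = len(next(iter(values.values()))[1])
    merged = [0.0] * n_outcomes
    for p, vals in values.values():
        for i, v in enumerate(vals):
            merged[i] += (p / total) * v
    return merged

print(values_handshake(values))  # [0.5, 0.3, 0.2]: each AI's values, weighted by its chance of winning outright
```

Under this toy model each AI trades the all-or-nothing gamble of the race for a guaranteed share proportional to its bargaining weight, which is the sense in which the handshake overcomes a conflict rather than showing there was none.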
They would focus on transmitting what they want to be, not what they currently are.
Yes, that is the point of self-improvement for seed AIs—to create something more capable but with the same (long-term) goals. They probably wouldn’t have a sense of individual identity which would be destroyed with each significant change.