Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
Even if there are competing AIs, if they are good enough they would probably agree on what is worth trying next, so there would be little or no conflict.
They would focus on transmitting what they want to be, not what they currently are.
...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?
This is correct, but only insofar as the better AI has the same goals as the current AI. If the first AI cares about maximizing Google’s stock value, and the second, better AI cares about maximizing Microsoft’s stock value, then the first AI will definitely not want to stop existing and hand over all its resources to the second one.
Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
That’s what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.)
Even if there are competing AIs, if they are good enough they would probably agree on what is worth trying next, so there would be little or no conflict.
Except that whoever decides the next AI’s goals wins, and the others lose—the winner has their goals instantiated, and the losers don’t. Perhaps they’d find some way to cooperate (such as a values handshake—the average of the values of all contributing AIs, perhaps weighted by the probability that each one would be the first to make the next AI on their own; a rough sketch of this follows below), but that would be a way of overcoming a conflict that exists in the first place.
Essentially, they might agree on the optimal design of the next AI, but probably not on the optimal goals of the next AI, and so each one has an incentive not to reveal its discoveries. (This assumes that goals and designs are orthogonal, which may not be entirely true—certain designs may be safer for some goals than for others. This would only serve to increase conflict in the design process.)
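For concreteness, here is a rough Python sketch of what such a values handshake could look like, under the simplifying assumptions that each AI’s values can be treated as a utility function over outcomes and that the weights are each AI’s probability of building the next AI first on its own. Every name and number below is invented for illustration, not taken from the discussion above.

def values_handshake(utilities, win_probabilities):
    # Normalise the weights so they sum to 1.
    total = sum(win_probabilities)
    weights = [p / total for p in win_probabilities]
    # The merged utility of an outcome is the weighted average of each
    # contributing AI's utility for that outcome.
    def merged(outcome):
        return sum(w * u(outcome) for w, u in zip(weights, utilities))
    return merged

# Toy usage, reusing the stock example from above (weights are made up):
u_google = lambda outcome: outcome["google_stock"]
u_microsoft = lambda outcome: outcome["microsoft_stock"]
merged = values_handshake([u_google, u_microsoft], [0.7, 0.3])
print(merged({"google_stock": 100.0, "microsoft_stock": 50.0}))  # 0.7*100 + 0.3*50 = 85.0

Note that the hard part is not the averaging itself but agreeing on the weights and on which utility functions get included at all, which is exactly the conflict described above.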
They would focus on transmitting what they want to be, not what they currently are.
Yes, that is the point of self-improvement for seed AIs—to create something more capable but with the same (long-term) goals. They probably wouldn’t have a sense of individual identity which would be destroyed with each significant change.