Depends on the original AI’s value function. If it cares about humanity, or at least its own safety, then yes, making smarter AIs is not a convergent goal. But if it’s some kind of roboaccelerationist with a goal like “maximize intelligence in the universe”, it will build smarter AIs even knowing that doing so means being paperclipped.