I think Robin’s implied suggestion is worth taking seriously: don’t be so quick to discard the option of building an AI that can improve itself in certain ways, but not to the point of needing to hardcode something like Coherent Extrapolated Volition. Is it really impossible to make an AI that can become “smarter” in useful ways (including by modifying its own source code, if you like), without it ever needing to make decisions on its own that have severe nonlocal effects? If intelligence is an optimization process, perhaps we can choose more carefully what is being optimized until we are intelligent enough to go further.
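To make the “choose what is being optimized” idea slightly more concrete, here is a minimal toy sketch in Python of the shape I have in mind, with all the names (narrow_score, within_scope, propose_rewrite) purely hypothetical: the system is allowed to propose rewrites of its own behavior, but a rewrite is only adopted if it improves a fixed narrow objective and passes a designer-chosen scope check. This is an illustration of the idea, not a claim that such a gate would actually hold against a strong optimizer.

```python
import random

random.seed(0)

# Fixed evaluation data, so "improvement" is measured against the same
# narrow target every time rather than a moving one.
TEST_DATA = [[random.random() for _ in range(50)] for _ in range(5)]

def narrow_score(policy):
    """Fixed, externally chosen objective: how close the policy's output is
    to a sorted list. Deliberately local -- nothing here rewards acquiring
    broader capabilities or influence."""
    total = 0.0
    for data in TEST_DATA:
        out = policy(data)
        total -= sum(abs(a - b) for a, b in zip(out, sorted(data)))
    return total

def within_scope(candidate):
    """Scope check: the candidate must behave like a pure list-to-list
    function. A real check would need to be far stricter (no I/O, bounded
    runtime, and so on), and whether such a check can be made airtight is
    exactly the open question."""
    try:
        result = candidate([3.0, 1.0, 2.0])
        return isinstance(result, list) and len(result) == 3
    except Exception:
        return False

def propose_rewrite(policy):
    """Stand-in for the system modifying its own code. Here it just perturbs
    how many bubble-sort passes the policy makes over its input."""
    passes = max(1, getattr(policy, "passes", 0) + random.choice([-1, 1]))

    def candidate(data, n=passes):
        data = list(data)
        for _ in range(n):
            for i in range(len(data) - 1):
                if data[i] > data[i + 1]:
                    data[i], data[i + 1] = data[i + 1], data[i]
        return data

    candidate.passes = passes
    return candidate

def seed_policy(data):
    # The starting point: does nothing clever at all.
    return list(data)

policy = seed_policy
for _ in range(200):
    candidate = propose_rewrite(policy)
    # The gate: a rewrite is adopted only if it improves the narrow
    # objective AND stays inside the designer-chosen scope.
    if within_scope(candidate) and narrow_score(candidate) > narrow_score(policy):
        policy = candidate

print("accepted rewrites settled on", getattr(policy, "passes", 0), "passes")
```

The point of the sketch is that the optimization pressure lives entirely inside narrow_score and within_scope, which the designers choose and can keep deliberately local; the hard part, of course, is whether anything like within_scope can be specified rigorously for a system much smarter than its designers.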
I suppose one answer is that other people are on the verge of building AIs with unlimited powers, so there is no time to be thinking about limiting goals, powers, and initiative. I don’t believe it, but if it’s true, we really are hosed.
It seems to me that if reasoning leads us to conclude that building self-improving AIs is a million-to-one shot at not destroying the world, we could consider not doing it. Find another way.