Why do you think that an AI would need to make billions of sequential self-modifications when humans don’t need to?
For starters, humans aren’t able to make changes as easily as an AI can. We don’t have direct access to our source code that we can change effortlessly; any change we make costs time, money, or both.
That doesn’t address the question. It says that an AI could make self-modifications more easily; it doesn’t suggest that an AI needs to make such self-modifications. Human intelligence is an existence proof that human-level intelligence does not require “billions of sequential self-modifications”. Whether greater-than-human intelligence requires them, and indeed whether greater-than-human intelligence is even possible at all, is still an open question.
So I reiterate, “Why do you think that an AI would need to make billions of sequential self-modifications when humans don’t need to?”
Human intelligence is an existence proof that human-level intelligence does not require “billions of sequential self-modifications”. Whether greater-than-human intelligence requires them, and indeed whether greater-than-human intelligence is even possible at all, is still an open question.
Why do you think that an AI would need to make billions of sequential self-modifications when humans don’t need to?
Human intelligence required billions of sequential modifications (though not self-modifications). An AI in general would not need self-modifications, but for an AGI it seems they would be necessary. I don’t doubt that a formal argument for the latter claim has already been written by someone smarter than me, but a very informal version goes something like this:
If an AGI doesn’t need to self-modify, then that AGI is already perfect (or close enough that it couldn’t possibly matter). Since practically no software humans have ever built has been perfect in all respects, that seems exceedingly unlikely. Therefore, the first AGI would (very likely) need to be modified. Of course, at the beginning it might be modified by humans (thus, not self-modified), but the point of building AGI is to make it smarter than us. Once it is smarter than us by a certain amount, it wouldn’t make sense for us (stupider intellects) to improve it (a smarter intellect). Thus, it would need to self-modify, and do it a lot, unless by some ridiculously fortuitous accident of math (a) human intelligence is very close to the ideal, or (b) human intelligence builds something very close to the ideal on the first try.
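To make the shape of that argument a little more concrete, here is a toy Python sketch; nothing in the discussion above specifies a model, so every name and number in it is made up. It assumes each self-modification closes a fixed fraction of the remaining gap between the current AGI and some assumed ideal, and asks how many sequential steps that takes. The escape hatches (a) and (b) correspond to starting already within the “close enough that it couldn’t possibly matter” tolerance, where the answer is simply zero; otherwise the count grows roughly as log(gap/tolerance) divided by the per-step gain.

```python
import math

# Toy model (hypothetical, for illustration only): each self-modification
# closes a fixed fraction of the remaining gap between the current AGI and
# some assumed ideal. How many sequential modifications until the gap
# shrinks below a "couldn't possibly matter" tolerance?

def steps_to_near_ideal(starting_gap: float,
                        fraction_closed_per_step: float,
                        tolerance: float) -> int:
    """Smallest n with starting_gap * (1 - fraction_closed_per_step)**n <= tolerance."""
    return math.ceil(math.log(tolerance / starting_gap)
                     / math.log(1.0 - fraction_closed_per_step))

if __name__ == "__main__":
    # Unless the first AGI already starts within the tolerance (the (a)/(b)
    # escape hatches, where zero steps are needed), small per-step gains
    # translate into an enormous number of sequential modifications.
    for fraction in (0.5, 1e-3, 1e-6):
        n = steps_to_near_ideal(starting_gap=1.0,
                                fraction_closed_per_step=fraction,
                                tolerance=1e-6)
        print(f"gain per step {fraction}: about {n:,} sequential modifications")
```

Under these made-up numbers, halving the gap each step takes only a couple of dozen modifications, while one-in-a-million gains per step push the count into the tens of millions, which is the informal sense in which the AGI would need to self-modify “a lot”.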
It would be nice if those modifications turned out to be things that are good for us, even if we can’t understand them.