Whatever goal(s) the AI has, it knows in the future it’ll be trying to make those goals happen, and that those future attempts will be more effective if it’s smarter.
This isn’t true. You can adjust the strength of modern chess software. There are many reasons why an AI is not going to attempt to become as intelligent as possible. But the most important reason is that it won’t care if you don’t make it care.
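The adjustable-strength point is concrete: in search-based game AI, playing strength is literally a parameter you pass in. Here is a toy sketch (a depth-limited minimax player for tic-tac-toe, not real chess software; real engines expose similar knobs, e.g. Stockfish's UCI_LimitStrength/UCI_Elo options):

```python
# Toy illustration of "strength is a parameter": a depth-limited minimax
# player for tic-tac-toe. Capping `depth` weakens play; nothing in the
# program "wants" to search deeper than it is told to.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player, depth):
    """Return (score, move) for `player`; 'X' maximises, 'O' minimises.
    Search is cut off after `depth` plies -- that cutoff is the strength dial."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, c in enumerate(board) if c == '.']
    if not moves or depth == 0:
        return 0, None  # draw, or search horizon reached
    best_score, best_move = None, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X', depth - 1)
        if best_score is None or \
           (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

def play(depth_x, depth_o):
    """X and O play each other at fixed depths; returns 'X', 'O' or 'draw'."""
    board, player = '.' * 9, 'X'
    depth = {'X': depth_x, 'O': depth_o}
    while winner(board) is None and '.' in board:
        _, move = minimax(board, player, depth[player])
        board = board[:move] + player + board[move + 1:]
        player = 'O' if player == 'X' else 'X'
    return winner(board) or 'draw'
```

Two full-depth players always draw (tic-tac-toe is a solved draw), and a depth-capped player can at best draw against a full-depth opponent. The program's "ambition" is whatever depth the caller chose.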
The intuitive continuation to that is that I think AIs will find self-improvement to be cheap and easy, at least until they are well above human level.
I am seriously unable to see how anyone could come to believe this.
I am seriously unable to see how anyone could come to believe this.
You are confused and uninformed. Please read up on instrumental values.