Recursive improvement could still occur on its own. If an AI thought that the optimal path to accomplishing a goal was to have a greater level of intelligence, which seems like it would be a fairly common situation, then the AI would start improving itself.
Similarly, if an AI thinks it could accomplish a task better with more resources, and decides that taking over the world is the best way to gain access to those resources, then it would do so.
Accomplish a task better? The best way to access those resources? How does it decide what is better or best if you don’t tell it what it should do? What you want the AI to do could just as well be to produce paperclips as slowly as possible and let humans consume them. What would “better” and “best” mean in that context, and why would the AI decide that taking over the universe is the way to figure that out? Why would it care to refine its goals? Why would it care about efficiency or speed when those characteristics might or might not be part of its goals?
An artificial agent doesn’t have drives of any sort; it wouldn’t mind being destroyed if you forgot to tell it what being destroyed means and that it should care about avoiding it.
As slowly as possible? That is 0 paperclips per second.
That is a do-nothing agent, not an intelligent agent. So: it makes a dreadful example.
As slowly as possible while still meeting demand with some specified confidence?
Maybe. At the moment, it seems rather underspecified—so there doesn’t seem to be much point in trying to predict its actions. If it just makes a few paperclips, in what respect is it a powerful superintelligence—rather than just a boring steel paperclip manufacturer?
Agreed.
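A minimal sketch of the distinction at stake here, with an assumed demand model and made-up numbers that appear nowhere in the exchange: “as slowly as possible”, taken alone, is minimized by producing nothing at all, while “as slowly as possible while still meeting demand with some specified confidence” has a small but nonzero optimum.

```python
# Toy illustration (my own; the thread specifies no demand model or numbers)
# of the two goal specifications discussed above.

from statistics import NormalDist

def min_rate_unconstrained() -> float:
    """Minimize the production rate with no further constraint.
    The optimum is trivially zero paperclips per second: a do-nothing agent."""
    return 0.0

def min_rate_meeting_demand(mean_demand: float, std_demand: float,
                            confidence: float = 0.99) -> float:
    """Minimize the production rate subject to P(rate >= demand) >= confidence,
    modelling per-second demand (purely for illustration) as normally
    distributed. The optimum is the confidence-quantile of demand:
    small, but nonzero."""
    return max(0.0, NormalDist(mean_demand, std_demand).inv_cdf(confidence))

print(min_rate_unconstrained())                  # 0.0
print(min_rate_meeting_demand(10.0, 2.0, 0.99))  # ~14.7 paperclips per second
```

The point of the toy model is only that the second specification is the one under which “how slow is too slow” even has an answer.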
Why would it care about efficiency or speed when those characteristics might or might not be part of its goals?

Well, I would assume that if someone designed an AI with goals, a preference for accomplishing those goals faster would also be included. And for the difficult problems we would build an AI to solve, there is a non-negligible probability that the AI would decide it could solve a problem faster with more resources.
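One way to see why such a built-in time preference matters, as a toy model of my own rather than anything stated above: if the objective discounts completion time, then an option that finishes sooner, for instance by using more resources, scores strictly higher, so speed and resource acquisition become instrumentally relevant even though they were never named as goals.

```python
# Toy model (my own, with made-up numbers): the same goal, achieved at
# different times, scored by an exponentially time-discounted utility.

def discounted_utility(completion_time_s: float, discount_per_s: float = 0.999) -> float:
    """Utility 1.0 for achieving the goal, discounted by how long it takes."""
    return discount_per_s ** completion_time_s

print(discounted_utility(3600.0))  # ~0.03: finishing in an hour with current resources
print(discounted_utility(60.0))    # ~0.94: finishing in a minute after acquiring more
```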