Based on “Why Tool AIs Want to Be Agent AI’s” by Gwern, I would expect an AGI level GPT-6 to self improve and become a world gobbling AI.
The moment it gets a hint that it could answer better by getting (unknown bit of data from the Internet, extra memory, some other resource), the software’s own utility function will push the machine in that direction.
OK, but in this case I’m trying to imagine something that’s not significantly smarter than humans. So it probably can’t think of any self-improvement ideas that an AI scientist wouldn’t have thought of already, and even if it did, it wouldn’t have the ability to implement them without first getting access to huge supercomputers to re-train itself. Right?
Based on “Why Tool AIs Want to Be Agent AI’s” by Gwern, I would expect an AGI level GPT-6 to self improve and become a world gobbling AI.
The moment it gets a hint that it could answer better by getting (unknown bit of data from the Internet, extra memory, some other resource), the software’s own utility function will push the machine in that direction.
OK, but in this case I’m trying to imagine something that’s not significantly smarter than humans. So it probably can’t think of any self-improvement ideas that an AI scientist wouldn’t have thought of already, and even if it did, it wouldn’t have the ability to implement them without first getting access to huge supercomputers to re-train itself. Right?
I worry that I’m splitting hairs now because it seems that the AI only needs to be clever enough to generate the following in response to a query :
The answer to your question will be provided more quickly if you provide 1 GB of RAM. (rinse and repeat until we get to an AI box)