I view AGI in an unusual way. I really don’t think it will be conscious or think in ways far outside its parameters. I think it will be much more of a tool: a problem-solving machine that can spit out a solution to any problem. To be honest, I imagine that one person or small organization will develop AGI and almost instantly ascend into (relative) godhood. They will develop an AI capable of taking over the internet, direct it to do so, and then calmly organize things as they see fit.
GPT-3, DALL-E 2, Google Translate… these are all very much human-operated tools rather than self-aware agents. Honestly, I don’t see a particular advantage to building a self-aware agent. To me, AGI is just a generalizable system that can solve any problem you present it with. The wielder of the system is in charge of alignment. It’s like if you had DALL-E 2 twenty years ago… what do you ask it to draw? It doesn’t have any reason to expand itself beyond its computer (maybe for more processing power? that seems like an unusual leap). You could probably generate some great deepfakes of world leaders, and that wouldn’t be aligned with humanity, but the human is still in charge. The only problem would be asking it for something like “an image designed to crash the human visual system” and getting an output that doesn’t align with what you actually wanted, because you are now in a coma.
So, I see AGI as more of a tool than a self-aware agent. A tool that can do anything, but not one that acts on its own.
I’m new to this site, but I’d love some feedback (especially if I’m totally wrong).
-Soareverix
You might be interested in Gwern’s essay Why Tool AIs Want to Be Agent AIs.
Appreciate it! Checking this out now.