I Believe we are in a Hardware Overhang
Epistemic status: I am just a regular person who follows the space, and this is just my hunch based on a few days of musing on long walks. You should not update on it. I just wanted to put my thoughts out there, and if it generates discussion, all the better.
If I were in charge, my hand would be on the fire alarm right now.
I can already envision ways we could use current, public-facing technology to create AGI. I would be surprised if no one did it in the next 5-8 years, even with no advancement in the constituent parts. I almost hesitate to propose my solution for fear of accelerating us towards doom, but the fruit is so low-hanging that either I'm wrong, or others already have the same idea.
Imagine this. Hook ChatGPT up to an image recognition system that describes visual input in real time, and another system for audio. Have ChatGPT parse out the most relevant information and store it in a database. The naming of files and folders can be done by ChatGPT itself. When some stimulus prompts GPT, it can search memory for possibly related files and load them into the context window. You could potentially also do this with low-res, labeled video and images. Finally, you'd have the main thread on GPT use a certain syntax to take real actions, like speaking, moving animatronics, or taking actions in a terminal.
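To make the loop concrete, here is a crude sketch of the perceive → remember → retrieve → act cycle described above. Everything in it is a made-up stand-in: the perception step, the file-per-memory layout, the keyword retrieval, and the `ACTION(verb: arg)` syntax are all hypothetical, and the actual model calls are stubbed out with plain functions.

```python
# Minimal sketch of the proposed loop: perceive -> remember -> retrieve -> act.
# All names and formats here are hypothetical stand-ins; the real system would
# route each step through a language model rather than these stubs.
import json
import os
import re

MEMORY_DIR = "memory"

def describe_stimulus(stimulus):
    # Stand-in for the image/audio recognition systems that turn raw
    # input into a text description the model can parse.
    return f"observed: {stimulus}"

def store_memory(description, tag):
    # In the proposal, the model itself would choose file and folder
    # names; here we just use a caller-supplied tag.
    os.makedirs(MEMORY_DIR, exist_ok=True)
    with open(os.path.join(MEMORY_DIR, f"{tag}.json"), "w") as f:
        json.dump({"description": description}, f)

def retrieve_related(stimulus):
    # Naive keyword match standing in for the model searching its own
    # memory for files related to the current stimulus.
    hits = []
    for name in os.listdir(MEMORY_DIR):
        with open(os.path.join(MEMORY_DIR, name)) as f:
            entry = json.load(f)
        if any(word in entry["description"] for word in stimulus.split()):
            hits.append(entry["description"])
    return hits

def act(model_output):
    # Parse an invented action syntax like ACTION(speak: hello) out of
    # the main thread's text output; returns None if no action is taken.
    m = re.search(r"ACTION\((\w+): (.+)\)", model_output)
    if m:
        return {"verb": m.group(1), "arg": m.group(2)}
    return None
```

A run of the cycle would then look like: `store_memory(describe_stimulus("a red ball"), "red_ball")`, later `retrieve_related("red ball")` to load that description into the context window, and `act("ACTION(speak: hello there)")` to dispatch whatever the main thread asked for.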
Obviously it’s not as simple as I have made it out to be. There is a lot of handwaving in the explanation above. The phrase “hook up” is doing a lot of work, and ChatGPT in its current form would need a lot of fine-tuning, or maybe outright retraining. Perhaps one GPT thread wouldn’t be enough, and many would have to be incorporated to handle different parts of the process. Maybe creating a way to coordinate all of this is too big a challenge. That said, on a gut level, I simply no longer believe that it’s out of reach. If OpenAI released a weak but completely general AI in the next two years, my only shock would be that it didn’t Foom before we got to see it.
Clearly I am a layperson. I do not understand the challenges, or even the viability of my proposal. That said, I would be interested to see whether others are of a similar mind, or can easily disabuse me of my beliefs.
Cheers all.
Oh, and please delete this post or ask me to if it poses any sort of epistemic risk.