Could we convincingly fake AGI right now, with no technological improvements at all? Suppose you took this face and its speech synthesis/recognition and hooked it up to GPT-3 with an appropriate prompt (or even fine-tuned it on a large set of conversations if you wanted it to work better), then attached the whole thing to a Boston Dynamics Atlas, perhaps with some simple stereotyped motions built in, like jumping and pacing, set to trigger at random intervals or in response to the frequency of words being output by the NLP system.
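The control loop here is simple enough to sketch. Below is a minimal, hypothetical outline in Python: the language-model, speech, and robot interfaces are stand-in stubs (none of these are real APIs), and the gesture trigger follows the random-interval / word-frequency idea above.

```python
import random

# Canned motions assumed to be preprogrammed on the robot (hypothetical names).
GESTURES = ["jump", "pace", "shift_weight"]

def maybe_gesture(reply: str, rng=random):
    """Pick a stereotyped motion at random intervals, or when word output is dense."""
    words = reply.split()
    if not words:
        return None
    # Fire ~10% of the time at random, or when words-per-character is high.
    if rng.random() < 0.1 or len(words) / len(reply) > 0.15:
        return rng.choice(GESTURES)
    return None

def converse_once(heard, generate_reply, speak, move):
    """One turn of the loop: speech in -> language model -> speech out, plus motion.

    generate_reply, speak, and move are injected stubs standing in for a
    GPT-3 completion call, Sophia-style TTS, and an Atlas motion routine.
    """
    reply = generate_reply(heard)
    speak(reply)
    gesture = maybe_gesture(reply)
    if gesture:
        move(gesture)
    return reply
```

The point of the sketch is that nothing in the loop requires new technology: each stub maps onto an existing, off-the-shelf component.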
Put the whole thing in a room with a window looking in, have people come in and converse with it, and I think you could convince even a careful non-expert that you'd built something near-human-level. Other than the mechanical engineering skill to assemble the whole thing, getting the GPT-3 API to work with Sophia's speech synthesis, and programming the Atlas, it wouldn't even be difficult. If you did something like that, how convincing would it likely be?