How significant a limitation would it be that a YouTube AI can’t recursively improve its own architecture?
I guess that depends on whether you believe that AIs like GPT-3 are evidence for short timelines and that GPT-N might be an AGI. If you do, I don’t see the lack of self-improvement as a big limitation; if you don’t (perhaps because you believe recursive self-improvement is needed), then obviously it’s a big limitation.
Personally, I do think the GPT models are evidence that recursive self-improvement is not necessarily needed for AGI, but I’m interested in counterarguments.
How significant a limitation would it be that it can only recommend existing videos rather than create its own?
I initially believed that to be a massive limitation. But Lê pointed out that with so much content already there, and so much more added every hour, “the closest YouTube video” to what you want is probably quite close, provided what you want is broad enough (a query like “someone arguing against Thomas Kuhn” rather than “a frame like that, then a frame like that, then...”).
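To make “the closest video” a bit more concrete, here’s a toy sketch of nearest-neighbour retrieval over an embedding space. The embeddings, titles, and similarity measure are all made up for illustration; this is not a claim about how YouTube’s recommender actually works.

```python
# Toy illustration only: nearest-neighbour lookup over made-up "video embeddings",
# to show what "the closest video to a broad query" could mean operationally.
import numpy as np

def closest_video(query_vec, video_vecs, titles):
    """Return the title whose embedding has the highest cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    v = video_vecs / np.linalg.norm(video_vecs, axis=1, keepdims=True)
    return titles[int(np.argmax(v @ q))]

# Hypothetical 3-dimensional "embeddings" standing in for a real semantic space.
titles = ["Critique of Kuhn's paradigm shifts", "Intro to thermodynamics", "Cat compilation"]
video_vecs = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])
query = np.array([0.85, 0.15, 0.0])  # e.g. "someone arguing against Thomas Kuhn"

print(closest_video(query, video_vecs, titles))  # -> "Critique of Kuhn's paradigm shifts"
```

The point is just that a broad query lands near something in a dense library, whereas a frame-by-frame specification almost certainly doesn’t.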
For a takeoff scenario, though, I would think you need something like human AI researchers using the recommendation algorithm to help them design a more general AI, and I’m not sure how much content would be relevant to that.