I didn’t mean to imply we should wait for AI to pass the Turing test before doing alignment work. Perhaps the disagreement comes down to you thinking “We should take GPT-3 as a fire alarm for AGI and must push forward AI alignment work,” whereas I’m thinking “There is and will be no fire alarm, and we must push forward AI alignment work.”
Ah, well said. Perhaps we don’t disagree then. Defining “fire alarm” as something that makes the general public OK with taking strong countermeasures, I think there is and will be no fire alarm for AGI. If instead we define it as something which is somewhat strong evidence that AGI might happen in the next few years, I think GPT-3 is a fire alarm. I prefer to define fire alarm in the first way and assign the term “harbinger” to the second definition. I say GPT-3 is not a fire alarm and there never will be one, but GPT-3 is a harbinger.
Do you think GPT-3 is a harbinger? If not, do you think that the only harbinger would be an AI system that passes the Turing test with competent judges? If so, then it seems like you think there won’t ever be a harbinger.
I don’t think GPT-3 is a harbinger. I’m not sure if there will ever be a harbinger (at least to the public); I’m leaning towards no. An AI system that passes the Turing test wouldn’t be a harbinger; it would be the real deal.
OK, cool. Interesting. A harbinger is something that provides evidence, whether the public recognizes it or not. I think if takeoff is sufficiently fast, there won’t be any harbingers. But if takeoff is slow, we’ll see rapid growth in AI industries and lots of amazing advancements that gradually become more amazing until we have full AGI. And so there will be plenty of harbingers. Do you think takeoff will probably be very fast?
Yeah, the terms are always a bit vague; as far as existence proofs for AGI go, there are already humans and evolution, so my definition of a harbinger would be something like “a prototype that clearly shows no more conceptual breakthroughs are needed for AGI.”
I still think we’re at least one breakthrough away from that point, though that belief is dampened by the position of Ilya Sutskever, whose opinion I greatly respect. But either way, GPT-3 in particular just doesn’t stand out to me from the rest of the DL achievements over the years, from AlexNet to AlphaGo to OpenAI Five.
And yes, I believe there will be fast takeoff.
Fair enough, and well said. I don’t think we really disagree then, I just have a lower threshold for how much evidence counts as a harbinger, and that’s just a difference in how we use the words. I also think probably we’ll need at least one more conceptual breakthrough.
What does Ilya Sutskever think? Can you link to something I could read on the subject?
You can listen to his thoughts on AGI in this video.
I find that he has an exceptionally sharp intuition about why deep learning works, from the original AlexNet to Deep Double Descent. You can see him predicting the progress in NLP here.