Have you seen Jacob Steinhardt’s article https://www.lesswrong.com/posts/WZXqNYbJhtidjRXSi/what-will-gpt-2030-look-like ? It seems like his prediction for a 2030 AI would already meet the threshold for being a transformative AI, at least in aspects not relating to robotics. But you put this at less than 1% likely on a much longer timescale. What do you think of that writeup, where do you disagree, and are there any places where you might consider recalibrating?
As an OpenAI employee I cannot say too much about short-term expectations for GPT, but I generally agree with most of his subpoints; e.g., running many copies, speeding up with additional compute, having way better capabilities than today, having more modalities than today. All of that sounds reasonable. The leap for me is (a) believing that this results in transformative AGI and (b) figuring out how to get these things to learn (efficiently) from experience. So in the end I find myself pretty unmoved by his article (which is high quality, to be sure).