To make sure I understand: you are saying (a) that our AIs are fairly likely to get significantly more sample-efficient in the near future, and (b) even if they don’t, there’s plenty of data around.
I think (b) isn’t a good response if you think that transformative AI will probably need to be human-brain-sized, you believe the scaling laws, and you think that short-horizon training won’t be enough. (Because then we’ll need something like 10^30+ FLOP to train TAI, which is plausibly reachable in 20 years but probably not in 10.) That said, I think short-horizon training might be enough.
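For concreteness, here’s a minimal back-of-the-envelope sketch of why 10^30 FLOP looks like a ~20-year target rather than a ~10-year one. The starting frontier (~10^25 FLOP for today’s largest training runs) and the doubling times are my own assumptions, not figures from this discussion:

```python
import math

# Hedged back-of-the-envelope sketch: years until ~1e30 FLOP training runs
# become feasible, given an assumed current frontier and an assumed doubling
# time for the largest training runs. Both inputs are assumptions.

TARGET_FLOP = 1e30          # rough TAI training requirement assumed above
CURRENT_FRONTIER = 1e25     # assumed size of today's largest training runs

def years_to_reach(target: float, current: float, doubling_time_years: float) -> float:
    """Years for the largest run to grow from `current` to `target` FLOP,
    assuming smooth exponential growth with the given doubling time."""
    doublings_needed = math.log2(target / current)
    return doublings_needed * doubling_time_years

for dt in (0.5, 1.0, 2.0):
    yrs = years_to_reach(TARGET_FLOP, CURRENT_FRONTIER, dt)
    print(f"doubling every {dt} yr -> ~{yrs:.0f} years")

# doubling every 0.5 yr -> ~8 years
# doubling every 1.0 yr -> ~17 years
# doubling every 2.0 yr -> ~33 years
```

On those assumptions, hitting 10^30 within 10 years requires roughly a six-month doubling time sustained for a decade, whereas a one- to two-year doubling time lands in the 20–30 year range.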
I think (a) is a good response, but it faces the objection: Why now? Why should we expect sample-efficiency to get dramatically better in the near future, when it has gotten only very slowly better in the past? (Has it? I’m guessing so, but maybe I’m wrong.)