“But AI escape would imply not only do we get to AGI, which will take decades probably, but that AGI is so wily and so smart, it outsmarts all of these billions of people that don’t want AI to harm us or kill us.”
That really won’t age well.
LeCun and Ng both did some good research a while ago, when the field was tiny, but are wildly overrated today.
I also don’t see how some people can say that AGI will take decades when GPT-4 is already almost there.
They say it because they’re trying to say the things that make them seem like sober experts, not the things they actually believe.
I think wanting to seem like sober experts makes them kinda believe the things they expect other people to expect to hear from sober experts.
@lc and @Mateusz, keep up that theorizing. This needs a better explanation.
But I think they do believe what they say. Could it be that they are pointing to something else when they use the word “AGI”? In fact, I don’t even know whether there is a commonly accepted definition of AGI.