He does not have to be convinced that AGI is coming soon. We have some evidence that it probably is, but there are still potential blockers. There is a reasonable chance that it will not actually come soon, and perhaps you should not attempt to convince someone of a proposition that may be false.
Your friend’s firm belief that it is impossible seems to be much less defensible. Are you sure that you each have the same concept of what AGI means? Does he accept that AGI is logically possible? What about physically possible? Or is his objection more along the lines that AGI is not achievable by humanity? Or just that we cannot achieve it soon?
There is no process of growing up being implemented, though, other than what happens during SSL. And that's probably the shoggoth growing up, not the masks (who would otherwise have a chance at independent existence), because there is no learning on deliberation in sequences of human-imitating tokens, only learning on whatever happens in the residual stream as the LLM is forced to read the dataset. So the problem is that instead of letting human-imitating masks grow up, the labs are letting the underlying shoggoths grow up, getting better at channeling situationally unaware masks.
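To make the distinction concrete, here is a minimal sketch of the SSL (pretraining) objective, assuming a standard next-token-prediction setup; `model`, `optimizer`, and `batch_tokens` are illustrative placeholders, not anyone's actual training code. The point is that the only gradient signal comes from predicting the next token of the fixed dataset, so nothing the model generates on its own (its "deliberation") ever enters the loss.

```python
import torch
import torch.nn.functional as F

def ssl_step(model, optimizer, batch_tokens):
    # batch_tokens: (batch, seq_len) integer token ids drawn from the corpus.
    # The model is trained purely to predict the next dataset token; its own
    # generated continuations never appear in the loss.
    inputs, targets = batch_tokens[:, :-1], batch_tokens[:, 1:]
    logits = model(inputs)  # (batch, seq_len - 1, vocab_size)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this objective, whatever "grows up" is whatever internal machinery (the residual-stream computation) best compresses the dataset, not any particular human-imitating persona the model can play.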
AGI is here, now. Kiddo is a bit young still is all.
https://arxiv.org/pdf/2303.12712.pdf