Furthermore, you compare humans to computers and brains to machines, and you imply that consciousness is computation. To say that "consciousness is not computation" is comparable to a "god of the gaps" argument is ironic, considering the existence of the AI effect. Your view is hardly coherent in any worldview other than hardcore materialism (which is itself not coherent). Again, we stumble into an area of philosophy, which you hardly addressed in your article. Instead, you focused on predicting how good our future computers will be at computing, while making appeals to emotion, appeals to unending progress, and appeals to the fallacy that solving the last 10% of the "problem" is as easy as the first 90% - that because we are "close" to imitating intelligence (and we are not, if you consider the full view of intelligence), we have somehow grasped the essence of it, and "if only we get slightly better at X or Y, we will solve it".
Scientists have been predicting the coming of AGI since the 1950s; some believed 70 years ago that it would take only 20 years. We have clearly not changed as humans. The question of intelligence, and thus the question of AGI, is in many ways inherently linked to philosophy, and it is clear that your philosophy is that of materialism, which cannot provide a good understanding of "intelligence" and all related ideas like mind, consciousness, sentience, etc. If you were to reconsider your position and ditch materialism, you might find that your idea of AGI is not compatible with the abilities of a computer, or of non-living matter in general.
It does not follow that computationally cheaper things are more likely to happen than computationally expensive ones. Moreover, describing something as "computationally difficult" is a subjective value judgment (unless you can reasonably prove otherwise), and it implies that all actions and events can be reduced to some form of computation.