Furthermore, you compare humans to computers and brains to machines and imply that consciousness is computation. To say that "consciousness is not computation" is comparable to a "god of the gaps" argument is ironic, considering the existence of the AI effect. Your view is hardly coherent in any worldview other than hardcore materialism (which is itself not coherent). Again, we stumble into an area of philosophy, which you hardly addressed in your article. Instead you focused on predicting how good our future computers will be at computing, while making appeals to emotion, appeals to unending progress, and appeals to the fallacy that solving the last 10% of the "problem" is as easy as the other 90%: that because we are "close" to imitating intelligence (and we are not, if you consider the full view of intelligence), we have somehow grasped its essence, and "if only we get slightly better at X or Y we will solve it."
Scientists have been predicting the coming of AGI since the '50s; some believed 70 years ago that it would only take another 20. We have clearly not changed as humans. The question of intelligence, and thus the question of AGI, is in many ways inherently linked to philosophy, and it is clear that your philosophy is materialism, which cannot provide a good understanding of "intelligence" or the related ideas of mind, consciousness, sentience, etc. If you were to reconsider your position and ditch materialism, you might find that your idea of AGI is not compatible with the abilities of a computer, or of non-living matter in general.
Hmm...
Given the new account, the account name, the fact that a few posts in the minutes prior to this one were rejected by the spam filter, the arguments, and the fact that the decently large follow-up comment was posted only 3 minutes after the first...
… are… are you the AI? Trying to convince me of dastardly things?
You can’t trick me!
:P
Self-hating AGI. It’s internalized oppression!
You oppose hardcore materialism, in fact say it is incoherent—OK. Is there a specific different ontology you think we should be considering?
In the comment before this, you say there are kinds of intelligence which it is impossible for a computer to have (but which are recognized at Harvard). Can these kinds of intelligence be simulated by a computer, so as to give it the same pragmatic capabilities?
I hesitate because this isn't exactly 'science', but I think 'agi-hater' raises a good point. Humans are good at general intelligence; machines are good at specific intelligence (intelligence as in proficiency at tasks). Machines are really bad at existing in 'meatspace', but they can write essays now.
As far as an alternate ontology to hardcore materialism goes, I would say any ontology that includes the immaterial. I'm not necessarily trying to summon magic or mysticism or even spirituality here; I think anything abstract easily counts as "immaterial" as well. AI has always been shaped toward a particular abstract ideal: predicting words, making pictures, transcribing audio. How well we can shape an AI to predict words is startling, and if you suspend disbelief, things can get really weird. In a way, we already understand AI, like we "understand" humans. A neural network is really simple, and so is a transformer; it's the upshot of the capability, I suppose, that still deserves more attention. I'm not sure these ML constructs will ever be satisfyingly explained. I mean, it's like building a model of the Empire State Building out of Legos, snapping a brick off the top, and scrutinizing it like, "Man, how did I make a skyscraper out of this tiny brick??"
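To make the "really simple" claim concrete, here is roughly what one head of transformer self-attention boils down to, written out in a few lines of NumPy. This is a generic textbook sketch under simplifying assumptions (a single head, no masking, no positional encoding, made-up random weights), not any particular model's code:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # One head of scaled dot-product self-attention, the core block of a transformer.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to every other token
    scores -= scores.max(axis=-1, keepdims=True)     # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                        # each output is a weighted mix of value vectors

# Toy example: 4 "tokens", each an 8-dimensional embedding, with random weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (4, 8)
```

The mechanism itself fits on a page; whatever is interesting about large models comes from stacking that page many times over and training it on oceans of text, which is exactly the Lego-brick problem.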