Your definition of AGI (“the kind of AI with sufficient capability to make it a genuine threat to humanity’s future or survival if it is misused or misaligned”) is tragically insufficient, vague, subjective, and arguably misaligned with the generally accepted definition of AGI.
From what you wrote elsewhere (“An AGI having its own goals and actively pursuing them as an agent”), you imply that the threat could come from AGI’s intentions, that is, you imply that AGI will have consciousness, intentionality, etc. - qualities so far ascribed exclusively to living things (you have provided no arguments to think otherwise).
However, you decided to define “intelligence” as “stuff like complex problem solving that’s useful for achieving goals”, which means that intentionality, consciousness, etc. are unconnected to it (realistically, any “complex”-enough algorithm satisfies this condition). Such a simplistic and reductionist definition implies that being “intelligent” is not enough for a computer to be an AGI. So, while you may be able to prove that a computer could have “intelligence”, it still does not follow that AGI is possible.
Your core idea that “We’ve already captured way too much of intelligence with way too little effort.” may be true under your definition of “intelligence”, but I hope I have shown that such a definition is not enough. Researchers at Harvard suggest the existence of multiple types of intelligence; your statement does not take this into account and groups all types of intelligence into one, even though some are impossible for a computer to have and some could be considered defining qualities of a computer.
“However, you decided to define ‘intelligence’ as ‘stuff like complex problem solving that’s useful for achieving goals’, which means that intentionality, consciousness, etc. are unconnected to it”

This is the relevant definition for AI notkilleveryoneism.
Furthermore, you compare humans to computers and brains to machines, and imply that consciousness is computation. To say that “consciousness is not computation” is comparable to a “god of the gaps” argument is ironic, considering the existence of the AI effect. Your view is hardly coherent in any worldview other than hardcore materialism (which itself is not coherent). Again, we stumble into an area of philosophy, which you hardly addressed in your article. Instead, you focused on predicting how good our future computers will be at computing while making appeals to emotion, appeals to unending progress, and appeals to the fallacy that solving the last 10% of the “problem” is as easy as the other 90% - that because we are “close” to imitating it (and we are not, if you consider the full view of intelligence), we have somehow grasped the essence of it and “if only we get slightly better at X or Y we will solve it”.
Scientists have been predicting the coming of AGI since the ’50s; some believed 70 years ago that it would only take 20 years. We have clearly not changed as humans. The question of intelligence and, thus, the question of AGI is in many ways inherently linked to philosophy, and it is clear that your philosophy is that of materialism, which cannot provide a good understanding of “intelligence” and related ideas like mind, consciousness, sentience, etc. If you were to reconsider your position and ditch materialism, you might find that your idea of AGI is not compatible with the abilities of a computer, or of non-living matter in general.
Hmm...
Given the new account, the account name, the fact that there were a few posts in the minutes prior to this one rejected by the spam filter, the arguments, and the fact that the decently large followup comment was posted only 3 minutes after the first...
… are… are you the AI? Trying to convince me of dastardly things?
You can’t trick me!
:P
Self-hating AGI. It’s internalized oppression!
You oppose hardcore materialism, and in fact say it is incoherent. OK: is there a specific different ontology you think we should be considering?
In the comment before this, you say there are kinds of intelligence which it is impossible for a computer to have (but which are recognized at Harvard). Can these kinds of intelligence be simulated by a computer, so as to give it the same pragmatic capabilities?
I hesitate because this isn’t exactly ‘science’, but I think ‘agi-hater’ raises a good point. Humans are good at general intelligence; machines are good at specific intelligence (intelligence as in proficiency at tasks). Machines are really bad at existing in ‘meatspace’, but they can write essays now.
As for an alternative ontology to hardcore materialism, I would say any ontology that includes the immaterial. I’m not necessarily trying to summon magic or mysticism or even spirituality here; I think anything abstract easily counts as “immaterial” as well. AI now and always has been shaped toward a particular abstract ideal: predicting words, making pictures, transcribing audio. How well we can shape an AI to predict words is startling, and if you suspend disbelief, things can get really weird. In a way, we already understand AI, like we “understand” humans. A neural network is really simple, and so is a transformer; it’s the resulting capability, I suppose, that deserves more attention. I’m not sure these ML constructs will ever be satisfyingly explained. I mean, it’s like building a model of the Empire State Building out of Legos, snapping a brick off the top, and scrutinizing it like, “Man, how did I make a skyscraper out of this tiny brick??”
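To make the “really simple” claim concrete, here is a minimal sketch of the core of one transformer layer: a single self-attention step in plain numpy. This is an illustration under simplifying assumptions (one attention head, made-up dimensions, no layer norm or feed-forward sublayer), not any particular model’s implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(x, Wq, Wk, Wv, Wo):
    """One single-head self-attention step, the heart of a transformer layer.

    x:              (seq_len, d_model) token embeddings
    Wq, Wk, Wv, Wo: learned weight matrices, each (d_model, d_model)
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # project tokens into queries, keys, values
    scores = q @ k.T / np.sqrt(x.shape[-1])   # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)        # normalize scores into attention weights
    return x + weights @ v @ Wo               # mix values by attention, add back to the input (residual)

# Toy usage with random weights: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))
Wq, Wk, Wv, Wo = [0.1 * rng.normal(size=(d, d)) for _ in range(4)]
print(attention_block(x, Wq, Wk, Wv, Wo).shape)  # (4, 8)
```

That the whole mechanism fits in a dozen lines is the point; the startling part is what stacks of this, trained at scale, end up doing.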