If that’s what “general” means, why not just say “conscious AI”? I suspect the answer is that the field has already come to terms with the fact that conscious machines are philosophically unattainable. Another word was needed that was both sufficiently meaningful and sufficiently meaningless to refocus (or, more accurately, misdirect) attention onto “The Thing Humans Do That Machines Don’t That Is Very Useful”.
The burden of defining concepts like “AGI” is on the true believers, not the skeptics. Labeling someone “disappointingly stupid” when they aren’t the one making non-falsifiable claims about binary systems doing the “sort of stuff I can do” gets that burden backwards. Simply making fun of your critics for lacking sufficient imagination to comprehend your epistemically incoherent claims is nothing more than lazy burden-shifting.
I do get a kick out of statements like “but you can’t explain to me how you recognize a cat”, as if the epistemically weak explanations for human general intelligence excuse, or even somehow validate, epistemically weak explanations for AGI.
If that’s what “general” means, why not just say “conscious AI”?
What? What does consciousness have to do with ‘the thing humans do that lets us land on the Moon, invent plastic, etc.’?
I suspect the answer is that the field has already come to terms with the fact that conscious machines are philosophically unattainable.
Huh? Seems neither relevant nor true. (Like, it’s obviously false that conscious machines are impossible, and obviously false that there’s a consensus that ‘conscious machines are impossible’.) I don’t understand what you even mean here.
Labeling someone “disappointingly stupid” when they aren’t the one making non-falsifiable claims about binary systems doing the “sort of stuff I can do” gets that burden backwards.
The dumb thing isn’t rejecting the label “AGI”. It’s thinking that this rejection matters for any of the arguments about AGI risk.
… Also, how on earth is the idea of AGI unfalsifiable? In a world where machines melt whenever you try to push them to par-human capabilities in science, you would have falsified AGI.
The concept “AGI” is indeed unfalsifiable, and we shouldn’t expect it to be falsifiable. It’s a descriptor, not a model or a prediction, and so is the wrong type of thing for falsifiability. It’s also slightly vague, but that’s a different objection altogether.
The nearest corresponding falsifiable predictions are along the lines of “an AGI will be publicly verified to exist by year X”, for every value of X, or maybe “this particular design with these specifications will create AGI”. Lots of these have already been falsified. Others can be falsified, but have not yet been. There are of course also predictions involving the descriptor “AGI” that cannot be falsified.
“AGI” doesn’t actually make ANY claim at all. That is my primary point: it is an utterly useless term, except that it is sufficiently meaningful and sufficiently meaningless at the same time to serve as the basis for conveying an intangible concept.
YOU, specifically, have not made a single claim that can be falsified. Please point me at your claim if you think I missed it.