I don’t disagree with any of this.
And yet, some people do seem to be generally “better at things” than others. And I am more afraid of a broken human (he might shoot me) than of a broken teacup.
It is certainly possible that “intelligence” is a purely intrinsic property of my own mind, a way of measuring how much I need to use the intentional stance to model another being, rather than model-based reductionism. But this is still a fact about reality, since my mind exists in reality. And in that case “AI alignment” would still be a necessary field, because there are objects whose minimal complexity-to-express is larger than the size of my mind, and I would want knowledge that lets me approximate their behavior.
But I can’t robustly define words like “intelligence” in a way that beats the teacup test. So overall I am unwilling to say “the entire field of AI Alignment is bunk because intelligence isn’t a meaningful concept.” I just feel very confused.