The implication, as I see it, is that since (by your definition) any sufficiently intelligent AI will be able to determine (and will be motivated to follow) the wishes of humans, we don’t need to worry about advanced AIs doing things we don’t want.
1. Arguments from definitions are meaningless: defining a “sufficiently intelligent AI” as one that follows human wishes tells us nothing about whether the AIs we actually build will satisfy that definition.
2. You never stated the second parenthetical, which is key to your argument and also on very shaky ground. There’s a big difference between the AI knowing what you want and doing what you want. “The genie knows but doesn’t care,” as it is said.
3. Have you found a way to make programs that never have unintended side effects? No? Then “we wouldn’t want this in the first place” doesn’t mean “it won’t happen”.