There’s only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable.
Of course there’s a gap. The AI doesn’t start with full NL understanding. So we have to write the AI’s goals before the AI understands what the symbols mean.
Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions. And we can’t do that initial programming using NL, of course.
Of course there’s a gap. The AI doesn’t start with full NL understanding.
Since you are talking in terms of a general counterargument, I don’t think you can appeal to a specific architecture.
So we have to write the AI’s goals before the AI understands what the symbols mean.
Which would be a problem if it were designed to attempt to execute NL instructions without checking whether it understands them... which is a bit clown-car-ish. An AI that is capable of learning NL as it goes along is an AI that has a general goal of getting language right. Why assume it would not care about one specific sentence?
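To make the distinction concrete, here is a minimal sketch of the "check before executing" behaviour described above. Everything in it is hypothetical: the `interpret` function stands in for a real NL-understanding component, and the confidence numbers are made up for illustration; this is not anyone's actual design.

```python
# Hypothetical sketch: an instruction executor that refuses to act until
# its own parse of a natural-language instruction clears a confidence
# threshold, and asks for clarification otherwise.

from dataclasses import dataclass


@dataclass
class Interpretation:
    action: str        # the action the system thinks was requested
    confidence: float  # self-estimated probability that the parse is right


def interpret(instruction: str) -> Interpretation:
    # Stand-in for a real NL-understanding component; the lookup table
    # and the confidence values are illustrative assumptions only.
    known = {"fetch the coffee": Interpretation("fetch_coffee", 0.95)}
    return known.get(instruction.lower(), Interpretation("unknown", 0.10))


def execute(instruction: str, threshold: float = 0.9) -> str:
    parse = interpret(instruction)
    if parse.confidence < threshold:
        # The "non-clown-car" branch: check understanding before acting.
        return f"clarification requested: unsure what {instruction!r} means"
    return f"executing: {parse.action}"


if __name__ == "__main__":
    print(execute("fetch the coffee"))     # confident parse -> act
    print(execute("maximise paperclips"))  # low confidence -> ask first
```

The point of the sketch is only that "execute NL instructions" and "execute NL instructions unconditionally" are different designs: the clarification branch is exactly the "checking" step the clown-car design omits.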
Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions
Y-e-es? Why assume “it needs to follow instructions” equates to “it would simplify the instructions it’s following” rather than something else?