They are entitled to assume they could be applied, not necessarily that they would be. At some point, there’s going to have to be something that tells the AI to, in effect, “use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]”. This gap may be easy to bridge, or hard; no-one’s suggested any way of bridging it so far.
There’s only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable. However, your side of the debate has never shown that.
At some point, there’s going to have to be something that tells the AI to, in effect, “use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]”.
No… you don’t have to show a fan how to make a whirring sound. The use of updatable knowledge to specify goals is a natural consequence of some designs.
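To make the contrast concrete, here is a minimal, runnable Python sketch of the two designs being argued over: a compartmentalised utility function whose content is frozen when it is written, versus a goal expressed as a reference into an updatable knowledge base, so that the goal’s effective meaning tracks whatever the system currently believes “X” means. Every class and name below is an illustrative placeholder, not part of any design proposed in this exchange.

```python
# Toy contrast (all names hypothetical): a frozen utility function versus a goal
# that is a pointer into an updatable knowledge base.

class KnowledgeBase:
    """Stand-in for the AI's updatable world model / lexicon."""
    def __init__(self):
        self._definitions = {}

    def learn(self, concept_id, predicate):
        # Later learning can refine what a concept means.
        self._definitions[concept_id] = predicate

    def evaluate(self, concept_id, outcome):
        return self._definitions[concept_id](outcome)


class CompartmentalisedUF:
    """Goal content fixed at design time; subsequent learning never touches it."""
    def __init__(self, frozen_predicate):
        self._predicate = frozen_predicate

    def score(self, outcome):
        return self._predicate(outcome)


class KnowledgeGroundedGoal:
    """Goal stored as a concept reference; scoring consults current knowledge."""
    def __init__(self, kb, concept_id):
        self.kb = kb
        self.concept_id = concept_id

    def score(self, outcome):
        return self.kb.evaluate(self.concept_id, outcome)


kb = KnowledgeBase()
kb.learn("honestly_do_X", lambda outcome: outcome == "crude early notion of X")
goal = KnowledgeGroundedGoal(kb, "honestly_do_X")
frozen = CompartmentalisedUF(lambda outcome: outcome == "crude early notion of X")

# Later learning refines the concept; only the knowledge-grounded goal tracks it.
kb.learn("honestly_do_X", lambda outcome: outcome == "refined notion of X")
print(goal.score("refined notion of X"))    # True  -> goal meaning updated with knowledge
print(frozen.score("refined notion of X"))  # False -> frozen at write time
```

The only point of the sketch is that in the second design, updated knowledge flows into goal evaluation without any separate bridging step being written afterwards.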
It might be possible; it might be trivial.
You are assuming it is difficult, with little evidence.
But there’s no evidence in that direction so far, and the designs that people have actually proposed have been disastrous.
Designs that bridge a gap, or designs that intrinsically don’t have one?
I’ll work at bridging this gap, and see if I can solve it to some level of approximation.
Why not examine the assumption that there has to be a gap?
There’s only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable.
? Of course there’s a gap. The AI doesn’t start with full NL understanding. So we have to write the AI’s goals before the AI understands what the symbols mean.
Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions. And we can’t do that initial programming using NL, of course.
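A minimal sketch of the bootstrapping point being made here, with hypothetical names throughout: whatever NL competence the system later acquires, the seed program can only act on an objective expressed in its own formal terms, because “obey this English sentence” is itself behaviour that has to be programmed before any English is understood.

```python
# The NL objective is just opaque bytes to the seed program.
NL_OBJECTIVE = "use the knowledge and definitions in your knowledge base to honestly do X"

def formal_proxy_for_x(outcome: str) -> float:
    """Whatever hand-written approximation of X the designers commit to at write time."""
    return 1.0 if outcome == "looks like X to the designers" else 0.0

def seed_choose(candidate_outcomes):
    # The NL string above cannot steer this choice until an interpreter for it exists,
    # and the rule "defer to that interpreter once it exists" must itself be written in code.
    return max(candidate_outcomes, key=formal_proxy_for_x)

print(seed_choose(["looks like X to the designers", "actually is X"]))
```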
Of course there’s a gap. The AI doesn’t start with full NL understanding.
Since you are talking in terms of a general counterargument, I don’t think you can appeal to a specific architecture.
So we have to write the AI’s goals before the AI understands what the symbols mean.
Which would be a problem if it were designed to attempt to execute NL instructions without checking whether it understands them… which is a bit clown-car-ish. An AI that is capable of learning NL as it goes along is an AI that has a general goal to get language right. Why assume it would not care about one specific sentence?
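A minimal sketch of the kind of design gestured at here, with hypothetical names and an arbitrary confidence threshold: an agent whose general goal of getting language right leads it to check how confident it is in its reading of a specific instruction before acting on that reading.

```python
def interpret(instruction: str):
    """Toy interpreter: returns (parsed goal, confidence in that parse)."""
    if "do X" in instruction:
        return ("X-as-currently-understood", 0.4)   # still unsure what X means
    return (None, 0.0)

def handle_instruction(instruction: str, confidence_threshold: float = 0.9):
    parsed_goal, confidence = interpret(instruction)
    if confidence < confidence_threshold:
        # Getting the language right dominates: ask rather than act on a guess.
        return f"clarification request: what exactly do you mean by {instruction!r}?"
    return f"executing: {parsed_goal}"

print(handle_instruction("honestly do X"))
```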
Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions
Y-e-es? Why assume “it needs to follow instructions” equates to “it would simplify the instructions it’s following” rather than something else?