So, in the general case, something that will take a natural language request, turn it into a family of optimizable models, identify the most promising ones, ask the user to choose, and then return an optimized answer?
Notice that it doesn’t actually have to do anything itself; it only gives answers. This makes it much easier to build and creates an extra safeguard for free.
But is there anything more we can pare away? For example, a provably correct natural language parser is impossible because natural language is ambiguous and inconsistent. Humans certainly don’t always parse it correctly. On the other hand, it’s easy for a human to learn a machine language, and huge numbers of them have already done so.
So in the chain of events below, the AI’s responsibility would be limited to the steps in all caps, and humans would do the rest.
[1 articulate a need] → [2 formulate an unambiguous query] → [3 FIND CANDIDATE MODELS] → [4 user chooses a model or revises step 2] → [5 RETURN OPTIMAL MANIPULATIONS TO THE MODEL] → [6 user implements manipulation or revises step 2]
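To make that division of labor concrete, here is a minimal sketch of what such a system’s interface might look like, in Python. Every name in it (CandidateModel, OracleAI, and so on) is hypothetical and invented purely for illustration; nothing like this exists, least of all a working body for find_candidate_models.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class CandidateModel:
    """One formal model the AI proposes as a reading of the user's query."""
    description: str                    # human-readable summary, for step 4
    objective: Callable[[dict], float]  # the quantity to optimize in step 5
    variables: Sequence[str]            # the knobs the user could turn

class OracleAI:
    """The AI handles only steps 3 and 5; the human does everything else."""

    def find_candidate_models(self, query: str) -> list[CandidateModel]:
        """Step 3: map an unambiguous query to candidate formal models."""
        raise NotImplementedError  # the hard, missing piece

    def optimize(self, model: CandidateModel) -> dict:
        """Step 5: return the manipulations that optimize the chosen model.

        Note that it returns a recommendation; it never acts on the world.
        """
        raise NotImplementedError  # existing optimizers could slot in here
```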
We have generic algorithms that do step 5. They don’t always scale well, but that’s an engineering problem that a lot of people in fields outside AI are already working to solve. We have domain-specific algorithms, some of which can do a decent job of step 3: spam filters, recommendation engines, autocorrectors.
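As a toy illustration of step 5 with existing tools: suppose the user has already formalized their need (step 2) as “minimize the material cost of a box of at least unit volume” and chosen that model (step 4). A stock optimizer from scipy then returns the optimal manipulations. The model itself is invented for this example.

```python
import numpy as np
from scipy.optimize import minimize

def cost(dims: np.ndarray) -> float:
    """Surface area of the box, standing in for material cost."""
    w, h, d = dims
    return 2 * (w * h + h * d + w * d)

# Constraint for step 2's requirement: volume must be at least 1.
volume_constraint = {"type": "ineq", "fun": lambda x: x[0] * x[1] * x[2] - 1.0}

result = minimize(cost, x0=[1.0, 1.0, 1.0],
                  constraints=[volume_constraint],
                  bounds=[(0.01, None)] * 3)

# The "optimal manipulations" handed back to the user (step 6):
# a cube of side ~1, which the user may implement or reject.
print(result.x)
```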
So, does this mean that what’s really missing is a generic problem-representer?
Well, that and friendliness, but even if we can articulate a coherent, unambiguous code of morality, we will still need a generic problem-representer to actually incorporate it into the optimization procedure.