We have to start somewhere, and “we do not know what to do” is not starting.
Also, this whole business of “what I really meant”: I think we can break it down into specific failure modes and address them individually.
-One of the failure modes is poor contextual reasoning. In order to discern what a person really means, you have to reason about the context of their communication.
-Another failure mode involves not checking activities against norms and standards. There are a number of ways to arrive at the conclusion that Mom is to be rescued from the house alive and, hopefully, uninjured.
-The machines in these examples do not seem to forecast or simulate potential outcomes, or judge them against external standards (see the sketch after this list).
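To make that last point concrete, here is a minimal sketch in Python of what “forecast the outcome, then judge it against external standards” could look like. Everything in it (the Outcome fields, the norm predicates, the simulate function) is a hypothetical stand-in for illustration, not a proposal for a real architecture:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    mom_alive: bool       # hypothetical predicted facts about the world
    mom_uninjured: bool
    house_intact: bool

# External standards, hard to soft: violating a hard norm rejects a
# plan outright; soft norms only affect its score.
HARD_NORMS = [lambda o: o.mom_alive]
SOFT_NORMS = [lambda o: o.mom_uninjured, lambda o: o.house_intact]

def judge(plan, simulate):
    """Forecast the plan's outcome and score it against the norms.

    `simulate` is an assumed forecasting model: plan -> Outcome.
    """
    outcome = simulate(plan)
    if not all(norm(outcome) for norm in HARD_NORMS):
        return float("-inf")              # never acceptable
    return sum(norm(outcome) for norm in SOFT_NORMS)

def choose(plans, simulate):
    # Pick the plan whose forecast best satisfies the external standards.
    return max(plans, key=lambda p: judge(p, simulate))
```

The point is only the shape of the loop: simulate first, then check the simulated result against standards that live outside the plan itself.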
“Magical source of wisdom?” No. What we are talking about is whether it is possible to design a certain kind of AGI: one that is safe and friendly.
We have shown this to be a complicated task. However, we have not fleshed out all the possible approaches, and therefore we cannot falsify the claims of those who insist that it can be done.
Poor contextual reasoning happens many times a day among humans. Our threads are full of it. In many cases the consequences are negligible. If the context is unclear and a phrase can be interpreted one way or another, no magical wisdom is involved, just a three-way choice (sketched in code after the list):
Clarification is existential: ASK
Clarification is nice to have: say something that does not reveal that you have no idea what was meant, and try to prompt the other party to reveal contextual information.
Clarification unnecessary or even unwanted: stay in the dark, or keep the other party in the dark.
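That three-way choice is simple enough to write down as a decision procedure. A minimal sketch in Python; the stakes estimate and the three reply helpers are hypothetical placeholders for whatever an agent would actually say:

```python
from enum import Enum

class Stakes(Enum):
    EXISTENTIAL = 3   # misunderstanding could be catastrophic
    USEFUL = 2        # clarification is merely nice to have
    IRRELEVANT = 1    # clarification unnecessary or even unwanted

# Hypothetical stand-ins for the agent's actual replies.
def ask_directly(utterance):
    return f"What exactly do you mean by '{utterance}'?"

def probe_indirectly(utterance):
    # Noncommittal reply that invites more context without
    # revealing that we have no idea what was meant.
    return "Interesting. Tell me more about that."

def answer_as_is(utterance):
    return "Noted."   # stay in the dark, or keep the other party there

def respond(utterance: str, stakes: Stakes) -> str:
    """The three-case clarification policy from the list above."""
    if stakes is Stakes.EXISTENTIAL:
        return ask_directly(utterance)
    if stakes is Stakes.USEFUL:
        return probe_indirectly(utterance)
    return answer_as_is(utterance)
```

The hard part, of course, is not the branching but estimating the stakes correctly, which is itself a contextual-reasoning problem.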
Making correct associations from few contextual hints is what AGI is about. Even today, narrow-AI translation software is quite good at figuring out context by brute-force statistical similarity analysis.
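As a toy illustration of that brute-force approach (the senses and collocate sets are invented for the example): pick the interpretation whose known collocates overlap most with the surrounding words.

```python
# Toy word-sense disambiguation by co-occurrence overlap: the kind of
# brute-force statistical similarity analysis translation software uses.
SENSES = {
    "bank/finance": {"money", "loan", "account", "deposit"},
    "bank/river":   {"water", "fish", "shore", "boat"},
}

def disambiguate(context_words):
    """Choose the sense with the largest overlap with the context."""
    context = set(context_words)
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(disambiguate(["the", "boat", "drifted", "to", "the", "bank"]))
# -> "bank/river": the surrounding words share more collocates
```

Real systems use far larger corpora and smarter statistics, but the principle is the same: context narrows the interpretation, with no wisdom required.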