...If you are dealing with an entity that can’t add context (or ask for clarifications) the way a human would.
Can we note that you’ve moved from “the problem is not open-ended” to “the AGI is programmed in such a way that the problem is not open-ended”? That shift is the whole of the problem.
In a sense. Non-openness is a non-problem for fairly limited AIs, because their limitations prevent them from having a wide search space that would need to be narrowed down. Non-openness is also part of, or an implication of, an ability that is standardly assumed in a certain class of AGIs, namely those with human-level linguistic ability: to understand a sentence correctly is to narrow down its space of possible meanings.
Only AIXI-like agents have an openness that would need additional measures to narrow it down.
They are no threat at the moment, and the easy answer to AI safety might be simply not to use them... just as we don’t build hydrogen-filled airships.