I hadn’t fully appreciated the difficulty that could result from AIs having alien concepts, so thanks for bringing it up.
However, it seems to me that this would not be a big problem, provided the AI is still interpretable. I’ll provide two ways to handle this.
For one, you could potentially translate the human concepts you care about into statements using the AI’s concepts. Even if the AI doesn’t use the same concepts people do, it is still incentivized to form a detailed model of the world. If you have access to the AI’s entire world model but still can’t figure out basic things like whether a modeled outcome involves the world getting destroyed or the AI taking over, then that model doesn’t seem very interpretable. So I’m skeptical that this would really be a problem.
But, if it is, it seems to me that there’s a way to get the AI to have non-alien concepts.
In a comment thread with another person, I made a modification to the system: the people outputting utilities should be able to refuse to output one for a given query, for example because the situation is too complicated or too vague for humans to judge the desirability of. This could potentially let people keep the AI from relying on very alien concepts.
To deal with alien concepts, you can simply have the people refuse to provide a utility for a described possibility whenever the description is too alien for them to understand. The AI would then have to come up with reasonably non-alien concepts in order to get any of its calls to its utility function to work.
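To make the refusal mechanism a bit more concrete, here is a minimal sketch, assuming a hypothetical interface where human evaluators can return a refusal instead of a number; every name in it (HumanEvaluator, query_utility, and so on) is illustrative rather than part of any actual system.

```python
# Hypothetical sketch of the refusal mechanism described above.
# None of these names correspond to a real system; they only illustrate
# how refusals deny the AI a utility signal for alien descriptions.
from typing import Optional


class HumanEvaluator:
    """Stands in for the people who output utilities for described possibilities."""

    def query_utility(self, description: str) -> Optional[float]:
        # The evaluators may refuse (return None) whenever the description is
        # too complicated, too vague, or phrased in concepts they can't follow.
        if not self.is_comprehensible(description):
            return None
        return self.elicit_utility(description)

    def is_comprehensible(self, description: str) -> bool:
        # Placeholder: in practice this is a human judgment call, not code.
        return "alien-concept" not in description

    def elicit_utility(self, description: str) -> float:
        # Placeholder for the actual human utility judgment.
        return 0.0


def ai_utility_call(evaluator: HumanEvaluator, description: str) -> float:
    # The AI only gets a usable utility value when it phrases the possibility
    # in concepts the evaluators can understand; a refusal gives it nothing
    # to optimize, which pushes it toward non-alien concepts.
    utility = evaluator.query_utility(description)
    if utility is None:
        raise ValueError("Refused: rephrase in human-understandable concepts.")
    return utility
```

The point of the sketch is just that refusals make alien descriptions useless to the AI, so getting its utility calls to return anything at all requires it to work in human-comprehensible terms.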