I don’t think this problem is very hard to resolve. If an AI is programmed to make sense of natural-language concepts like “chocolate bar”, it should already have a mechanism for acquiring a best-effort understanding of them. So you could rewrite the motivation as:
“create things which the greatest number of people understand to be a chocolate bar”
or alternatively:
“create things which the programmer most likely understood to be a chocolate bar”.