We could also be living in such a hellworld right now and not know it.
Indeed. But you’ve just described it to us ^_^
What I’m mainly asking is “if we end up in world W, and no honest AI can describe to us how this might be a hellworld, is it automatically not a hellworld?”
It looks like examples won't work here, since any example is itself an explanation, so it doesn't count :)
But in some sense it could be similar to Gödel's theorem: there are true propositions which the AI can't prove (and an explanation could be counted as a type of proof).
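To make the analogy a bit more concrete, here is a rough sketch in logical notation (my own paraphrase; "Explainable" is just a label I'm introducing for "some honest AI can give us an explanation of this", not a defined term from this thread):

```latex
% Rough sketch of the analogy (assumes amsmath + amssymb if compiled).
% Gödel's first incompleteness theorem: for any consistent, recursively
% axiomatizable theory T containing enough arithmetic, there is a sentence
% G_T that is true in the standard model of arithmetic but unprovable in T:
\[
  \mathbb{N} \models G_T \qquad \text{and} \qquad T \nvdash G_T .
\]
% The suggested parallel: "W is a hellworld" might behave like G_T,
% true but beyond what any honest AI can explain to us:
\[
  \mathrm{Hellworld}(W) \qquad \text{and} \qquad
  \neg\,\mathrm{Explainable}\big(\mathrm{Hellworld}(W)\big).
\]
```

This is only an analogy, of course; "explanation to a human" is a much fuzzier notion than formal provability.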
OK, another example: there are bad pieces of art; I know they are bad, but I can't explain why in formal language.
That’s what I’m fearing, so I’m trying to see if the concept makes sense.