I would state it slightly differently by saying: DragonGod’s original question is about whether an AGI can think a thought that no human could ever understand, not in a billion years, not ever. DragonGod is entitled to ask that question—I mean, there are no rules, people can ask whatever question they want! But we’re equally entitled to point out that it’s not an important question for AGI risk, or really for any other practical purpose that I can think of.
For my part, I have no idea what the answer to DragonGod’s original question is. ¯\_(ツ)_/¯