I think the focus on quantitative vs. qualitative is a distraction. If an AI does become powerful enough to destroy us, it won't matter whether it's qualitatively more powerful or 'just' quantitatively more powerful.
I would state it slightly differently by saying: DragonGod's original question is about whether an AGI can think a thought that no human could ever understand, not in a billion years, not ever. DragonGod is entitled to ask that question—I mean, there are no rules; people can ask whatever questions they want! But we're equally entitled to point out that it's not an important question for AGI risk, or really for any other practical purpose I can think of.
For my part, I have no idea what the answer to DragonGod’s original question is. ¯\_(ツ)_/¯