I agree, there is some innate “angle of repose” (continuing with the tall/wide analogy) present in the structure of the knowledge itself. The higher the concept we operate at, the more “base” knowledge it needs to support it. So they aren’t completely independent.
Mostly I was thinking about what to call these “axes” in conversation so that it’s clear what I’m talking about.
Might not be the best approach, but I’ve seen people use the term “Artificial Cleverness” for Wide-but-Short AI. Things like ChatGPT fit the bill perfectly: it is “clever” (quick but superficial at analyzing a broad set of data), but not “Tall” at all.