I notice I fail to see a way to create a superhumanly Tall Intelligence that would not quite quickly become very Wide. Either the Tall Intelligence would be able to reach sideways and broaden itself, or it would be trivial to stack a whole bunch of Tall Intelligences working as a Tall Phalanx and thus be functionally Wide.
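A minimal, purely illustrative sketch of the Tall Phalanx idea, assuming nothing from the discussion beyond the metaphor itself: the expert functions and the `tall_phalanx` router are hypothetical names, and each “expert” is a stub standing in for a narrow-but-deep system. The point is only that a shallow router over narrow Tall systems behaves, from the outside, like one functionally Wide system.

```python
from typing import Callable, Dict

# Hypothetical narrow experts: each is "Tall" in exactly one domain.
# These stubs stand in for systems like a superhuman Go engine or
# a superhuman theorem prover.
def go_expert(query: str) -> str:
    return "deep Go analysis of: " + query

def math_expert(query: str) -> str:
    return "deep proof search for: " + query

EXPERTS: Dict[str, Callable[[str], str]] = {
    "go": go_expert,
    "math": math_expert,
}

def tall_phalanx(query: str, domain: str) -> str:
    """Route a query to the matching Tall expert. The ensemble as a
    whole covers many domains, i.e. it is functionally Wide, even
    though every member is narrow."""
    expert = EXPERTS.get(domain)
    if expert is None:
        raise ValueError(f"no Tall expert for domain {domain!r}")
    return expert(query)

print(tall_phalanx("joseki in the upper-left corner", "go"))
print(tall_phalanx("is 2**61 - 1 prime?", "math"))
```

Of course this toy router dodges the hard part, deciding which expert a real query needs, and that routing judgment is itself a Wide skill.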
Wide Artificial Intelligence is simple, which is why we already have it. Making it Wider is easy, just time-consuming and full of legwork. Having a Taller-than-human Intelligence would make the process much easier, because (to milk the metaphor dry) a Tall Intelligence can see further and learn how to reach Wider.
AlphaGo is superhuman at Go, and Go only. It will also be possible to make an AI that is very good at math but has no idea about the real world.
I agree, there is some innate “angle of repose” (continuing the tall/wide analogy) present in the structure of the knowledge itself. The higher the concept we operate on, the more “base” knowledge is needed to support it. So the two dimensions aren’t completely independent.
Mostly I was thinking about what to call these “axes” in conversation so that it’s understandable what I’m talking about.
It might not be the best approach, but I’ve seen people use the term Artificial Cleverness for Wide-but-Short AI. Things like ChatGPT fit the bill perfectly: they are “clever” (quick but superficial at analyzing a broad set of data) but not “Tall” at all.