We can point to areas of chess, like the endgame databases, which are just plain inscrutable.
I think there is a key difference in places where the answers come from exhaustive search rather than from more intelligence: AI isn’t better at that than humans in any deep sense, and from the little I understand, engines don’t outperform in endgames (relative to their overperformance in general) via better policy networks; they do it via direct memorization or deeper lookahead.
The difference matters even more for other domains with far larger action spaces, since the exponential increase reduces the marginal value of intelligence in finding increasingly rare solutions. The design space for viruses is huge, and the design space for nanomachines using arbitrary configurations is larger still. If move-37-like intuitions are common, AI systems will be able to do things humans cannot understand, whereas if it’s more like chess endgames, they will need to search an exponential space in ways that are infeasible even for them.
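To make the scaling point concrete, here is a rough back-of-the-envelope sketch (Python, since a worked number helps); the branching factors are commonly cited ballpark estimates, and the “design space” entry is an arbitrary stand-in for virus or nanomachine design, not a measurement:

```python
# Rough sketch: how fast exhaustive lookahead blows up with branching factor.
# Branching factors are ballpark estimates, not measurements; "design space"
# is a hypothetical stand-in for open-ended domains like virus design.
BRANCHING = {"chess": 35, "Go": 250, "open-ended design (stand-in)": 10**6}

for domain, b in BRANCHING.items():
    # Number of leaf states a brute-force search must consider at depth 10.
    print(f"{domain}: ~{b}^10 = {b**10:.2e} states at depth 10")
```

Chess at roughly 10^15 is already tablebase-scale effort; a few more orders of magnitude of branching and no feasible amount of extra search closes the gap, which is exactly where move-37-style intuition would have to do the work instead.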
This relates closely to a folk theorem about NP-complete problems: many exponential problems are approximately solvable with greedy algorithms in O(n log n) or O(n^2) time, and TSP is NP-complete, but actual salesmen find sufficiently efficient routes easily.
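As a concrete illustration of that folk theorem, here is a minimal, self-contained sketch (all names are mine, not from any library): exact TSP over n cities means considering on the order of n! tours, but a greedy nearest-neighbor pass runs in O(n^2) and, on random Euclidean instances, typically lands within roughly 25% of optimal:

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def greedy_tour(points):
    """Nearest-neighbor heuristic: O(n^2), no optimality guarantee."""
    unvisited = set(range(1, len(points)))
    order = [0]  # arbitrarily start at the first city
    while unvisited:
        last = points[order[-1]]
        nearest = min(unvisited, key=lambda j: math.dist(last, points[j]))
        order.append(nearest)
        unvisited.remove(nearest)
    return order

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(200)]
route = greedy_tour(cities)
print(f"greedy tour over 200 cities: length {tour_length(cities, route):.2f}")
# Brute force would have to consider 199! orderings; the greedy route is
# usually "sufficiently efficient", in the actual-salesman sense above.
```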
But what part are you unsure about?
Yeah, on reflection, the music analogy wasn’t a great one. I’m not concerned by the mere existence of patterns we can’t intuit; humans can create those as well. (For example, it’s easy to make puzzles no one can solve.) The question is whether important domains are amenable to kinds of solutions that ASI can understand robustly in ways humans cannot. That is, can ASI solve “impossible” problems?
One specific and concerning difference is whether ASI could play perfect social 12-D chess, out-manipulating any human despite all of the uncertainties humans experience, and engineer arbitrary outcomes in social domains. There clearly isn’t a feasible search strategy with exact evaluation here, but if the ASI thinks far beyond human-legible ranges, it might be possible.
This isn’t just relevant to AI risk, of course. Another area is biological therapies, where, for example, it seems likely that curing or reversing aging requires the same sort of brilliant insight into insane complexity: figuring out whether there would be long-term or unexpected out-of-distribution impacts years later, without actually conducting multi-decade, large-scale trials.
If AI systems can make 500 years of progress before we notice they’re uncontrolled, that already assumes an insanely strong superintelligence.
Probably, if it’s of a type we can imagine and is comprehensible in those terms, but that’s assuming the conclusion! As Gwern noted, we can’t understand chess endgames. Similarly, for a strong ASI, the ASI-created probe or cure could look less like an engineered, purpose-driven system that is explainable at all, and more like a random set of actions, inexplicable in our terms, that nonetheless causes the outcome.