Without knowing the actual optimal curve of computational cost per “unit” of intelligence (and WTF that even means), it’s not very useful to know whether it’s polynomial or not. There are LOTS of NP-hard problems that humans have “solved” numerically or partially for real-world uses, at scales that are economically important. The solutions don’t scale perfectly, and they’re often not provably optimal, but they work.
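To make the “solved in practice” point concrete, here’s a minimal sketch: the travelling salesman problem is NP-hard, yet a cheap greedy heuristic produces usable tours at realistic scales. Everything below is my own illustration using only the Python standard library; the heuristic is the classic nearest-neighbour construction, which on random instances usually lands within tens of percent of optimal with no optimality proof.

```python
# Sketch: an NP-hard problem (TSP) handled "well enough" by a cheap heuristic.
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour_tour(points):
    """Greedy construction: always hop to the closest unvisited city.
    Fast (O(n^2)) and usable, but not provably optimal."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(500)]
tour = nearest_neighbour_tour(cities)
print(f"500-city tour length: {tour_length(cities, tour):.2f}")
```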
It’s hard to figure out the right metrics for world-modeling-and-goal-optimization that would prevent AGI from taking over or destroying most value for biological agents, and even harder to have any clue whether the underlying constraint being NP-hard matters at all in the next millennium. It probably WILL matter at some point, but that point could be 3 or 4 takeovers or alien-machine-intelligence discoveries away.
These are some of the considerations I would address when I get around to writing the post:
- Empirical probability distributions over inputs
- Weighting problems by their economic importance
- Approximate solutions
- Probabilistic solutions
- Etc.
All of these complicate the analysis (you’d probably want a framework for determining complexity that natively handles probabilistic/approximate solutions, so maybe input size in bits and “work done by optimisation in bits”), but even with all these considerations, you can still define a coherent notion of “time complexity”. A sketch of the bits idea follows below.
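One hedged way to operationalise “work done by optimisation in bits”: sample candidate solutions from an empirical input distribution and take -log2 of the fraction that scores at least as well as the optimiser’s output. The framing and every name below are my own sketch under those assumptions, not a standard measure or API.

```python
# Sketch: estimate how many bits of selection pressure an optimiser applied,
# by asking what fraction of random candidates score at least as well.
import math
import random

def optimisation_bits(score, optimised_value, sample_candidate, n=100_000):
    """-log2 of the fraction of sampled candidates scoring >= the optimiser's
    result. More bits = more optimisation work done relative to the baseline."""
    hits = sum(1 for _ in range(n) if score(sample_candidate()) >= optimised_value)
    hits = max(hits, 1)  # avoid log(0); a real analysis needs better tail estimates
    return -math.log2(hits / n)

# Toy problem: maximise closeness to a hidden target value.
random.seed(0)
target = 0.73
score = lambda x: -(x - target) ** 2
sample = lambda: random.random()  # stands in for an empirical input distribution

# A crude optimiser: best of 1,000 random samples (~log2(1000) ≈ 10 bits expected).
best = max((sample() for _ in range(1000)), key=score)
print(f"optimiser did ~{optimisation_bits(score, score(best), sample):.1f} bits of work")
```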
Just messaged my lecturer; I’ll see if I can get permission for such a project.