(Which reminds me: we don’t talk anywhere near enough about computational complexity on LW for my tastes. What’s up with that? An agent can’t do anything right if it can’t compute what “right” means before the Sun explodes.)
I agree with this concern (and my professional life is primarily focused on heuristic optimization methods, where computational complexity is a huge concern).
I suspect it doesn’t get talked about much here because of the emphasis on intelligence explosion, missing AI insights, provable friendliness, and normative rationality, and because there may simply not be much to say. To unpack those (the following are not positions I necessarily endorse):

- Intelligence explosion: an arbitrarily powerful intelligence might not care much about computational complexity (though it’s obviously still important if you care about marginal benefit and marginal cost at that level of power).
- Missing AI insights: until we understand what’s necessary for AGI, the engineering details separating polynomial, exponential, and totally intractable algorithms might not be very important (a toy sketch of that gap follows below).
- Provable friendliness: it’s really hard to prove how well heuristics do at optimization, let alone how robust they are.
- Normative rationality: the Heuristics and Biases literature focuses on areas where it’s easy to show humans aren’t using the right math, rather than on how best to think given the hardware you have, and some of that may be deeply embedded in the LW culture.
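To make the polynomial-versus-exponential point concrete, here is a toy sketch of my own (not anything from the parent comment, and the problem choice is arbitrary): an exhaustive subset-sum search that is guaranteed optimal but exponential in the number of items, next to a greedy heuristic that runs in O(n log n) but comes with no guarantee of how far from the optimum it lands.

```python
# Toy illustration: exact exponential search vs. a fast heuristic with no
# optimality guarantee, on a small subset-sum instance.
from itertools import combinations

def exact_best_subset(weights, capacity):
    """Exhaustive search over all 2^n subsets: guaranteed optimal, but exponential."""
    best = 0
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            total = sum(combo)
            if total <= capacity:
                best = max(best, total)
    return best

def greedy_subset(weights, capacity):
    """Greedy heuristic: O(n log n), fast, but can miss the optimum."""
    total = 0
    for w in sorted(weights, reverse=True):
        if total + w <= capacity:
            total += w
    return total

weights = [31, 27, 12, 9, 8, 5]
print(exact_best_subset(weights, 47))  # 47 (27 + 12 + 8), but blows up as n grows
print(greedy_subset(weights, 47))      # 43, instant, yet misses the optimum
```

Even on this tiny instance the greedy answer already falls short of the optimum; proving bounds on how badly a given heuristic can miss across all instances is the genuinely hard part.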
I think that there’s a strong interest in prescriptive rationality here, though, and if you have something to say on that topic or on computational complexity, I’m interested in hearing it.