Right, but one of the reasons for the curve of capability is the general version of Amdahl’s law. A new algorithm may make a particular computational task much easier, but if that task is only 5% of the total problem you are trying to solve, then even an infinite speedup on that task gives you at most a 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore’s Law.
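A minimal sketch of the bound this comment appeals to, assuming the usual form of Amdahl’s law (the function name and the example numbers are mine, chosen only for illustration):

```python
def overall_speedup(p, s):
    """Amdahl's law: overall speedup when a fraction p of the total
    work is sped up by a factor s; the remaining 1 - p is unchanged."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an infinite speedup on a 5% slice caps the whole job at ~1.05x:
print(overall_speedup(0.05, float("inf")))  # ~1.0526
print(overall_speedup(0.05, 10.0))          # ~1.0471
```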
May I suggest that the reason the proverbial person on the street has heard of Moore’s Law is more that 1) it is easier to understand, and 2) it has a more visibly obvious impact on their lives?
Edit: Also, regarding the 5%, sometimes the entire problem is just in the algorithm. For example, in the one I gave, factoring, the entire problem is whether you can factor integers quickly.
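To illustrate the factoring point: the task has no 5% slice to isolate, so a better algorithm speeds up the whole job. A naive baseline for comparison (my example, not the commenter’s):

```python
def trial_division(n):
    """Factor n by trial division: exponential in the bit-length of n.
    Breakthroughs such as the number field sieve attack this entire
    task at once; there is no untouched 95% left over."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(5959))  # [59, 101]
```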
But I will grant you that a constructive proof of P=NP would… be interesting. I don’t know that it would enable AI to go foom (that would still have other difficulties to overcome), but it would be of such wide applicability that my argument from the curve of capability against AI foom would be invalidated.
I share the consensus view that P=NP doesn’t seem to be on the cards, but I agree it would be better to have a proof of P != NP.
There are plenty of instances where an algorithmic breakthrough didn’t just apply to 5% of a problem, but to the major part of it. My field (applied math) is riddled with such breakthroughs.
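One canonical example of the kind of breakthrough this comment presumably has in mind (my choice, not the commenter’s) is the FFT, which took the discrete Fourier transform from O(n^2) to O(n log n), speeding up the whole task rather than a slice of it:

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) discrete Fourier transform, for comparison with
    the O(n log n) FFT that replaced it in practice."""
    n = len(x)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x

x = np.random.rand(256)
assert np.allclose(naive_dft(x), np.fft.fft(x))  # same answer, very different cost
```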
Yeah, but it’s fractal. For every such problem, there is a bigger problem such that the original one was a 5% subgoal. This is one of the reasons why the curve of capability manifests so consistently: you always find yourself hitting the general version of Amdahl’s law in the context of a larger problem.
Sure, you can always cook up some ad hoc problem such that my perfect 100% solution to problem A is just a measly 5% subcomponent of problem B. That doesn’t change the fact that I’ve solved problem A, with all the ramifications that come along with it. You’re just relabeling things to automate a moving goalpost. Luckily, an algorithm by any other name would still smell as sweet.
I think the point he was trying to make was that the expanding set of subgoals an AI would have to make its way through would be sufficient to slow it down to within the exponential we’ve all been working with.
Phrased this way, however, it’s a much stronger point and I think it would require more discussion to be meaningful.
Bear in mind that if we zoom in to sufficiently fine granularity, we can get literally infinite speedup—some of my most productive programming days have been when I’ve found a way to make a section of code unnecessary, and delete it entirely.
To show this part of my argument is not infinitely flexible, I will say that one algorithmic breakthrough that would invalidate it would be a constructive proof of P=NP (or a way to make quantum computers solve NP-complete problems in polynomial time—I’m told the latter has been proven impossible, but I don’t know how certain it is that there aren’t any ways around that).
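To make “of such wide applicability” concrete: given a hypothetical polynomial-time SAT decider, the standard search-to-decision self-reduction would recover solutions to any NP search problem. A sketch under that assumption; the decider here is an exponential brute-force stand-in (all names are invented for illustration, nothing like a polynomial decider is actually known):

```python
from itertools import product

def brute_force_sat(cnf):
    """Stand-in decider (exponential!) so the sketch actually runs."""
    nv = max((abs(l) for c in cnf for l in c), default=0)
    return any(
        all(any((l > 0) == bits[abs(l) - 1] for l in clause) for clause in cnf)
        for bits in product([False, True], repeat=nv)
    )

def solve_sat_search(cnf, num_vars, decider):
    """Search-to-decision self-reduction: pin variables one at a time,
    calling the (hypothetically polynomial-time) decider on each extension.
    A clause is a list of signed variable indices: [1, -3] means x1 OR NOT x3."""
    if not decider(cnf):
        return None  # unsatisfiable
    assignment = {}
    for v in range(1, num_vars + 1):
        pinned = cnf + [[v]]       # try x_v = True
        if not decider(pinned):
            pinned = cnf + [[-v]]  # then x_v must be False
        assignment[v] = pinned[-1][0] > 0
        cnf = pinned
    return assignment

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x3)
print(solve_sat_search([[1, 2], [-1, 3], [-3]], 3, brute_force_sat))
```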
There are no known strong results concerning the relationship between BQP (roughly the analogue of P for quantum computers) and NP. There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.
Presumably because P = NP would imply that NP is contained in BQP, so you can’t believe the first of your statements without believing the second.
It’s not even known if NP contains BQP?
No. The best we can do is that both contain P and are contained in PP, as far as I recall.
And there exist oracles relative to which BQP is not contained in MA (which contains NP).
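For reference, a summary of the textbook containments this exchange leans on (stated from general knowledge, not from the thread itself); the relationship between BQP and NP is open in both directions:

```latex
% Known unconditionally:
%   P \subseteq BPP \subseteq BQP \subseteq PP   and   P \subseteq NP \subseteq PP,
% with no containment known between BQP and NP in either direction.
\[
  \mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP},
  \qquad
  \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PP}
\]
```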