How much could be gained from more efficient programs, even if hardware improvements stall out?
A huge amount surely, at least for many problems. There’s no guarantee that any particular problem will be subject to vast further software improvements, though.
Can you expand on this? I suspect this is true for some classes of problems, but I’m sufficiently uncertain that I’m intrigued by your claim about this being “surely” going to happen.
A lot of existing improvement trends would have to suddenly stop, along with the general empirical trend of continued software progress. On many applications we are well short of the performance of biological systems, and those biological systems show large internal variation (e.g. the human IQ distribution) without an abrupt “wall” visible, indicating that machines could go further (as they already have on many problems).
I’m not quite sure software is well short of the performance of biological systems in terms of what software can do with a given number of operations per second. Consider cat image recognition: Google’s system has minuscule computing power compared to the human visual cortex, and performs accordingly (badly).
What I suspect, though, is that the greatest advances in speeding up technological progress would come from better algorithms that work on well-defined problems like making better transistors. Even humans make breakthroughs there not by verbally doing some “I think therefore I am” philosophy in their heads, but either by throwing science at the wall and seeing what sticks, or by imagining it visually in their heads, trying to imitate a non-intelligent simulator. Likewise for automated software development: so much of the thinking a human does for such tasks is really unrelated to the human capacity to see meaning and purpose in life, or to symbol grounding, or anything else of the kind that makes us fearsome, dangerous survival machines, and none of that needs to be built into automated programming software.
Why would you expect the opposite? Tight lower bounds have not been proven for most problems, much less algorithms produced which reach such bounds, and even in the rare cases where they have, the constant factors could well be substantially improved. And then there are hardware improvements like ASICs, which are no joking matter. I collected just a few possibilities (since it’s not a main area of interest for me, as it seems so obvious that there are many improvements left) in http://www.gwern.net/Aria%27s%20past,%20present,%20and%20future#fn3
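To make the constant-factor point concrete, here is a toy sketch in Python (an illustrative example only, not a benchmark of any real system): both functions compute the same sum in O(n) time, yet the vectorized one is typically orders of magnitude faster, purely from constant-factor differences.

```python
# Toy illustration: identical asymptotic complexity, very different constants.
import time
import numpy as np

def slow_sum(xs):
    """Sum via an interpreted Python loop -- large constant factor."""
    total = 0.0
    for x in xs:
        total += x
    return total

def fast_sum(xs):
    """Same O(n) work, but vectorized in compiled code via NumPy."""
    return float(np.sum(xs))

if __name__ == "__main__":
    data = np.random.rand(10_000_000)
    t0 = time.perf_counter(); slow_sum(data); t1 = time.perf_counter()
    t2 = time.perf_counter(); fast_sum(data); t3 = time.perf_counter()
    print(f"loop: {t1 - t0:.3f}s  vectorized: {t3 - t2:.3f}s")
```

Hardware specialization like ASICs amplifies the same effect: the algorithm’s asymptotics stay fixed while the constants shrink dramatically.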
I’m not sure, really. The conjectured limits in some cases are strong. Computational complexity is unfortunately an area where there is a vast difference between what we suspect and what we can prove. And the point about improvements in constant factors is very well taken; it is an area that’s often underappreciated.
But at the same time, these are reasons to suspect that improvements will exist. Carl’s comment was about improvement “surely” occurring, which seems like a much stronger claim. Moreover, in this context, while hardware improvements are likely to happen, they aren’t relevant to the claim in question, which is about software. But overall, this may be a language issue, and I may simply be interpreting “surely” as a stronger statement than it is intended to be.
Given the sheer economic value of improvements, is there any reason at all to expect optimization/research to just stop, short of a global disaster? (And even then, depending on the disaster...)
No, none in particular that I can think of. The only cases where people stop working on optimizing a problem are when the problem has become so easy that further optimization simply doesn’t matter, but such examples are rare, and even then, further optimization does occur, just at a slower pace.