With a big, fast hardware base (relative to the program) and an AI sophisticated enough to keep learning without continual human guidance and to grok AI theory, gains comparable to the entire history of AI so far, compressed into a few hours or weeks, would be reasonable from speedup alone.
Sure. But the end result of all that might be very small improvements in actual algorithmic efficiency. It might turn out, for example, that the best factoring algorithms are of the same order as the current sieves, and that after thousands of additional hours of computer science work the end result is a very difficult proof of that. If the complexity hierarchy doesn't collapse in a strong sense, then even with lots of resources to spend just thinking about algorithms, the AI won't improve the algorithms by that much in terms of actual speed, because they can't be improved that much.
Yes, I agreed that we should expect this on some problems, but we don't have reason to expect it across most problems, weighted by practical impact, especially for the specific skills where humans greatly outperform computers, skills with great relevance for strategic advantage.
Do you think we have much reason to expect that the algorithms underlying human performance (in the problems where humans greatly outperform today’s AI) are mostly near optimal at what they do, such that AIs won’t have any areas of huge advantage to leverage?
I agree about the human skills. I disagree with the claim about problems weighted by practical impact. For example, many practical problems turn out, in the general case, to be NP-hard or NP-complete, or are believed not to be solvable in polynomial time. Examples include the traveling salesman problem and graph coloring, both of which come up very frequently in practical applications across a wide range of contexts.
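As an illustration of the scaling problem behind this point (my sketch, not part of the original exchange), here is a minimal brute-force exact solver for the traveling salesman problem. The distance matrix is a made-up example; the point is that the search space grows as (n-1)!, so each added city multiplies the work, regardless of how fast the hardware is.

```python
import itertools
import math

def tsp_brute_force(dist):
    """Exact shortest round-trip tour starting and ending at city 0,
    found by enumerating all (n-1)! orderings of the other cities."""
    n = len(dist)
    best = math.inf
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

# Example: four cities at the corners of a unit square
# (adjacent corners at distance 1, diagonals at distance 2).
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
print(tsp_brute_force(dist))  # shortest tour walks the perimeter

# The number of candidate tours grows factorially with n:
for n in range(4, 9):
    print(n, math.factorial(n - 1))
```

Clever exact algorithms and good heuristics do much better than this in practice, but unless P = NP, no algorithm escapes super-polynomial worst-case growth on such problems.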
Many of those algorithms could probably be optimized a lot. There's an argument that we should expect humans to be near optimal (since we've spent a million years evolving to be really good at face recognition, understanding other human minds, etc.), and our neural nets are trained from a very young age to do this. But there's a lot of evidence that we are in fact suboptimal. Evidence for this includes Dunbar's number and many classical cognitive biases, such as the illusion of transparency.
But a lot of those aren't that relevant to fooming. Most humans can do facial recognition pretty fast and pretty reliably. If an AI can do that with a much smaller set of resources, more quickly and more reliably, that's really neat, but it isn't going to help it go foom.