In the last five decades humans have created algorithms for solving many problems that had previously been intractable, and delivered orders-of-magnitude improvements on others. Many of these have come from math and computer-science innovation that was not particularly hardware-limited, i.e. if you had the same (or a larger, smarter-on-average, better-organized) research community but with frozen, primitive hardware, many of the insights would still have been found.
Yes. I agree strongly with this. One major thing we’ve found in the last few years is just that P turns out to be large, and a lot of problems have turned out to be in there that were not obviously so. If one had asked people in the early 1970s whether they expected primality testing to be in P, they would probably have said no. Moreover, some practical problems have simply had their implementations improved a lot even as the algorithms remain in the same space and time complexity classes.
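To make the primality example concrete, here is a minimal sketch of a Miller–Rabin test, whose running time is polynomial in the number of digits of n (the AKS algorithm is what actually established unconditionally that primality is in P; the fixed base list below is an illustrative choice believed sufficient well beyond 64-bit inputs, not something from the discussion above):

```python
# Minimal sketch: a Miller-Rabin primality test. Each modular exponentiation is
# polynomial in the bit-length of n, illustrating why primality checking is cheap
# even for very large inputs (AKS later gave an unconditional polynomial-time test).
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    if n < 2:
        return False
    if any(n % p == 0 for p in bases):
        return n in bases          # n is either one of the small primes or composite
    d, r = n - 1, 0
    while d % 2 == 0:              # write n - 1 as d * 2**r with d odd
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)           # modular exponentiation, polynomial in len(n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: 2**61 - 1 is a Mersenne prime
```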
There are also problems where we are clearly far from the reachable frontier (whether that is near-optimal performance, or just the best that can be done given resource constraints).
Can you expand on this? I’m not sure I follow.
So long as enough domains have room to grow, gains in those domains can translate into strategic advantage even if others are stable.
Sure. But that doesn’t say much about how fast that growth will occur. The standard hard-takeoff narratives have the AI becoming functionally in control of its light cone in a matter of hours or at most weeks. I agree that there is likely a lot of room for improvement in cognitive capability, but the issue in this context is whether that sort of improvement is likely to occur quickly.
linear gains in chess performance translate into an exponential drop-off in the number of potential human challengers, etc.
I agree with your other examples, and it is a valid point. I don’t think that a strong form of P != NP makes fooming impossible, just that it makes it much less likely. The chess example, however, has an issue that needs to be nitpicked. As I understand it, this isn’t really about linear gains in chess translating into an exponential drop-off, but rather an artifact of the Elo system, which more or less builds in that a linear rating increase corresponds to a large improvement in win odds.
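For reference, Elo is defined so that the expected score of a player rated a given gap above an opponent is a logistic function of that gap: a fixed rating difference always means the same odds of winning. A minimal sketch of that built-in behavior (the rating gaps below are arbitrary examples):

```python
# Sketch: in the Elo model a fixed rating gap corresponds to a fixed expected
# score, so equal rating increments are equal multiplicative jumps in win odds.
def expected_score(gap):
    """Expected score of a player rated `gap` points above the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

for gap in (0, 200, 400, 600, 800):
    e = expected_score(gap)
    print(f"gap {gap:4d}: expected score {e:.3f}, odds {e / (1 - e):7.1f}:1")
# Every additional 400 rating points multiplies the odds by 10, regardless of
# the absolute level, which is why linear Elo gains look like large jumps.
```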
The standard hard-takeoff narratives have the AI becoming functionally in control of its light cone in a matter of hours or at most weeks.
The human field of AI is about half a million hours old; computer elements can operate at a million times human speed (given enough parallel elements). To a large extent, the important discoveries were limited not by chip speeds but by the pace of CS, math, and AI researchers’ thinking (with most of the work done by some thousands of people who spent much of that time eating, sleeping, goofing off, and getting up to speed on existing knowledge in the field).
With a big fast hardware base (relative to the program) and AI sophisticated enough to keep learning without continual human guidance and grok AI theory, gains comparable to the history of AI so far in a few hours or weeks would be reasonable from speedup alone.
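A rough back-of-the-envelope check of this claim, using the figures above plus an assumed fraction of researcher time spent on productive thought (that fraction and the overhead padding are illustrative assumptions):

```python
# Back-of-the-envelope: replay the serial "thinking time" of the AI field at a
# large hardware speedup. field_age_hours and speedup are the figures quoted
# above; productive_fraction and the overhead padding are illustrative guesses.
field_age_hours = 5e5        # "about half a million hours" of the field's existence
speedup = 1e6                # "a million times human speed"
productive_fraction = 0.2    # assumed share of researcher-hours spent actually thinking

serial_thinking_hours = field_age_hours * productive_fraction
wall_clock_hours = serial_thinking_hours / speedup
print(f"raw replay time: {wall_clock_hours:.2f} hours")       # ~0.1 hours

for overhead in (10, 1_000, 10_000):                          # padding for coordination etc.
    print(f"with {overhead}x overhead: {wall_clock_hours * overhead:,.0f} hours")
# Even with several orders of magnitude of padding, the result stays within the
# hours-to-weeks range claimed above.
```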
I agree that one could have scenarios in which there are AI programs with humanlike capacities that are not yet capable of such development (e.g. a super-bloated system running on massive server farms). However, they tend to involve AI development happening very surprisingly quickly, and don’t seem stable for long (bloated implementations can be made more efficient, with strong positive feedback in the improvement, and superhuman hardware will come soon after powerful AI if not before).
an artifact of the Elo system, which more or less builds in that a linear rating increase corresponds to a large improvement in win odds
I agree that this is true, but people often cite chess as an example where exponential hardware increases in the same algorithms led to only linear (Elo) gains.
With a big fast hardware base (relative to the program) and AI sophisticated enough to keep learning without continual human guidance and grok AI theory, gains comparable to the history of AI so far in a few hours or weeks would be reasonable from speedup alone.
Sure. But the end result of all that might end up being very small improvements in actual algorithmic efficiency. It might turn out, for example, that the best factoring algorithms are of the same order as the current sieves, and it might turn out that after thousands of additional hours of comp-sci work the end result is a very difficult proof of that. If the complexity hierarchy doesn’t collapse in a strong sense, then even with lots of resources to spend just thinking about algorithms, the AI won’t improve the algorithms by that much in terms of actual speed, because they can’t be improved by that much.
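For concreteness, the best known classical general-purpose factoring algorithm, the general number field sieve, has heuristic running time

```latex
\exp\!\Big( \big( (64/9)^{1/3} + o(1) \big)\, (\ln n)^{1/3} (\ln \ln n)^{2/3} \Big)
```

which is sub-exponential but nowhere near polynomial in the number of digits of n; the scenario being sketched above is one in which no amount of further thought improves materially on that exponent.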
But the end result of all that might end up being very small improvements in actual algorithmic efficiency. It might turn out, for example, that the best factoring algorithms are of the same order as the current sieves, and it might turn out that after thousands of additional hours of comp-sci work the end result is a very difficult proof of that.
Yes, I agreed that we should expect this on some problems, but that we don’t have reason to expect it across most problems, weighted by practical impact. Especially so for the specific skills where humans greatly outperform computers, skills with great relevance for strategic advantage.
Do you think we have much reason to expect that the algorithms underlying human performance (in the problems where humans greatly outperform today’s AI) are mostly near optimal at what they do, such that AIs won’t have any areas of huge advantage to leverage?
Yes, I agreed that we should expect this on some problems, but that we don’t have reason to expect it across most problems, weighted by practical impact, especially for the specific skills where humans greatly outperform computers, skills with great relevance for strategic advantage.
I agree about the human skills. I disagree with the claim about problems weighted by practical impact. For example, many practical problems turn out in the general case to be NP-hard or NP-complete, or are believed not to be solvable in polynomial time. Examples include the traveling salesman problem and graph coloring, both of which come up very frequently in practical applications across a wide range of contexts.
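To illustrate the kind of wall being pointed at: exact traveling-salesman solutions by brute force scale factorially in the number of cities, and even the best known exact methods (e.g. Held-Karp dynamic programming at O(n² · 2ⁿ)) are exponential. A toy sketch with made-up coordinates:

```python
# Toy illustration: exact TSP by brute force is O(n!) in the number of cities.
# Known exact algorithms improve this to O(n^2 * 2^n) (Held-Karp), but nothing
# known (or expected, if P != NP) brings the exact problem down to polynomial time.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (2, 1), (5, 3), (1, 4), (6, 0), (3, 5)]  # made-up coordinates

def tour_length(order):
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

best = min(permutations(range(len(cities))), key=tour_length)
print(f"best tour {best}, length {tour_length(best):.2f}")
print(f"tours examined: {factorial(len(cities))}")  # grows as n!
```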
Do you think we have much reason to expect that the algorithms underlying human performance (in the problems where humans greatly outperform today’s AI) are mostly near optimal at what they do, such that AIs won’t have any areas of huge advantage to leverage?
Many of those algorithms could likely be optimized a lot. There’s an argument that we should expect humans to be near optimal (since we’ve spent a million years evolving to be really good at face recognition, understanding other human minds, etc., and our neural nets are trained from a very young age to do this). But there’s a lot of evidence that we are in fact suboptimal. Evidence for this includes Dunbar’s number and a lot of classical cognitive biases such as the illusion of transparency.
But a lot of those aren’t that relevant to fooming. Most humans can do facial recognition pretty fast and pretty reliably. If an AI can do that with far fewer resources, more quickly and more reliably, that’s really neat, but it isn’t going to help it go foom.
I agree that one could have scenarios in which there are AI programs with humanlike capacities that are not yet capable of such development (e.g. a super-bloated system running on massive server farms). However, they tend to involve AI development happening very surprisingly quickly, and don’t seem stable for long (bloated implementations can be made more efficient, with strong positive feedback in the improvement, and superhuman hardware will come soon after powerful AI if not before).
I’m not sure how to interpret what you’re saying. You say:
they tend to involve AI development happening very surprisingly quickly
which sounds to me like a summary of long experience. But you also seem to be talking about a scenario which you cannot possibly have experienced even once. So, I’m not sure what you’re saying.
I’m saying that in my experience of people working out consistent scenarios that involve AI development with sustained scarcity, the scenarios offered usually involve the development of human-level AI early, before hardware can advance much further.
I agree that this is true, but people often cite chess as an example where exponential hardware increases in the same algorithms led to only linear (Elo) gains.
This is people being stupid in one direction. That isn’t a good reason to be stupid in another direction. The simplest explanation is that Elo functions as something like a log scale of actual ability.
Just to clarify, what do you mean by “actual ability”? In something like the 100 m dash, I can think of “actual ability” as finish time. We could construct an Elo rating based on head-to-head races of thousands of sprinters, and it wouldn’t be a log scale of finish times. Do you just mean percentile in the human distribution?
Yes, within some approximation. (Weird things happen at very large or very small Elo values.)
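To illustrate the percentile reading, here is a sketch under assumed numbers: if human ratings are treated as roughly normal (the mean and standard deviation below are illustrative guesses, not measured values), a fixed rating gain cuts the pool of remaining human challengers by a rapidly growing factor, which is the earlier “linear gains, exponential drop-off” point restated.

```python
# Sketch under assumed numbers: treat human chess ratings as roughly normal
# (the mean and standard deviation here are illustrative, not measured values)
# and look at what fraction of rated humans remain above a given rating.
from math import erf, sqrt

MEAN, SD = 1500.0, 300.0   # assumed distribution of rated human players

def fraction_above(rating):
    z = (rating - MEAN) / SD
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # 1 - normal CDF

for rating in (1800, 2100, 2400, 2700):
    print(f"rating {rating}: ~{fraction_above(rating):.2e} of rated humans still above")
# Each fixed +300 step cuts the remaining pool by a rapidly growing factor, so
# linear rating gains correspond to an ever-steeper drop-off in human challengers.
```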