I think the answer to the first question is that, as with every other (important) industry, the people in that industry will have the time and skill to notice the problems and start working on them. The FOOM argument says that a small group will form a singleton quickly, and so we need to do something special to ensure it goes well, and the non-FOOM argument is that AI is an industry like most others, and like most others it will not take over the world in a matter of months.
Where do you draw the line between “the people in that industry will have the time and skill to notice the problems and start working on them” and what is happening now, which is: some people in the industry (at least, you can’t argue DeepMind and OpenAI are not in the industry) noticed there is a problem and started working on it? Is it an accurate representation of the no-foom position to say that we should only start worrying when we literally observe a superhuman AI that is trying to take over the world? What if AI takes years to gradually push humans to the sidelines, but the process is unstoppable, because this time is not enough to solve alignment from scratch and the economic incentives to keep employing and developing AI are too strong to fight against?
Solving problems is mostly a matter of total resources devoted, not time devoted. Yes some problems have intrinsic clocks, but this doesn’t look like such a problem. If we get signs of a problem looming, and can devote a lot of resources then, that makes it tempting to save resources today for such a future push, as we’ll know a lot more then and resources today become more resources when delayed.
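(An illustrative toy calculation of the compounding tradeoff described above; the growth rate, lead time, and “we’ll know more later” multiplier are made-up assumptions, not numbers from the discussion.)

```python
# Toy comparison: spend safety resources now vs. save them for a future push.
# Every number here is an assumption for illustration, not an estimate.

def future_resources(budget: float, annual_growth: float, years: int) -> float:
    """Resources available later if today's budget is saved and compounds."""
    return budget * (1 + annual_growth) ** years

budget_today = 1.0          # normalized units of resources
annual_growth = 0.05        # assumed compounding rate for saved resources
years_of_warning = 20       # assumed lead time before clear signs of the problem
knowledge_multiplier = 2.0  # assumed boost from knowing more about the problem later

spend_now = budget_today
spend_later = future_resources(budget_today, annual_growth, years_of_warning) * knowledge_multiplier

print(f"Effective resources if spent now:   {spend_now:.2f}")   # 1.00
print(f"Effective resources if spent later: {spend_later:.2f}")  # ~5.31
# Under these assumptions the delayed push wins -- but only if the problem
# really does give that much warning, which is what the replies below dispute.
```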
Hmm. I don’t have as strong opinions about this, but this premise doesn’t seem obviously true.
I’m thinking about the “is science slowing down?” question – pouring 1000x resources into various scientific fields didn’t result in 1000x speedups. In some cases progress seemed to slow down. The three main hypotheses I have are:
Low hanging fruit got used up, so the problems got harder.
Average careerist scientists don’t matter much, only extremely talented, naturally motivated researchers matter. The naturally motivated researchers will do the work anyway.
Coordination is hard and scales with the number of people coordinating. If you have 1000x the researchers in a field, they can’t find each other’s best work that easily.
I agree that “time spent” isn’t the best metric, but it seems like what actually matters is “quality researcher hours that build on each other in the right way,” and it’s not obvious how much you can scale that.
If it’s just the low hanging fruit hypothesis then… that’s fine I guess. But if the “extreme talent/motivation” or “coordination” issues are at play, then you want (respectively) to ensure that:
a) at any given time, talented people who are naturally interested in the problem have the freedom to work on it, if there are nonzero things to do with it, since there won’t be that many of them in the future.
b) better coordination tools get built so that people in the future can scale their efforts.
(You may also want to make efforts not to get mediocre careerist scientists involved in the field.)
FWIW another reason, somewhat similar to the low hanging fruit point, is that because the remaining problems are increasingly specialized, they require more years’ training before you can tackle them. I.e. not just harder to solve once you’ve started, but it takes longer for someone to get to the point where they can even start.
Also, I wonder if the increasing specialization means there are more problems to solve (albeit ever more niche), so people are being spread thinner among them. (Though conversely there are more people in the world, and many more scientists, than a century or two ago.)
I think that this problem is in the same broad category as “invent general relativity” or “prove the Poincaré conjecture”. That is, for one thing quantity doesn’t easily replace talent (you couldn’t invent GR just as easily with 50 mediocre physicists instead of one Einstein), and, for another thing, the work is often hard to parallelize (50 Einsteins wouldn’t invent GR 50 times as fast). So, you can’t solve it just by spending lots of resources in a short time frame.
Yeah, I agree with this view and I believe it’s the most common view among MIRI folks.
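(A rough way to quantify the “hard to parallelize” point above is an Amdahl’s-law-style calculation; the serial fraction below is an assumption for illustration, not a claim from the thread.)

```python
# Amdahl's-law-style sketch: speedup from n equally capable researchers when a
# fraction of the work is inherently serial (each step depends on the last).

def speedup(n_workers: int, serial_fraction: float) -> float:
    """Classic Amdahl's law: 1 / (serial + (1 - serial) / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for n in (1, 5, 50):
    print(f"{n:>2} researchers -> speedup {speedup(n, serial_fraction=0.5):.2f}")
# With half the work serial, even 50 researchers give less than a 2x speedup.
```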
In software development, a perhaps relevant kind of problem solving, extra resources in the form of more programmers working on the same project don’t speed things up much. My guesstimate is output = time × log(programmers). I assume the main reason is that there’s a limit to the extent to which you can divide a project into independent parallel programming tasks. (Cf. nine women can’t make a baby in one month.)
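(A minimal sketch of what that guesstimate implies; the base of the logarithm and the team sizes are arbitrary choices for illustration, the point is just the diminishing returns.)

```python
import math

# Sketch of the "output = time x log(programmers)" guesstimate above.

def output(time_units: float, n_programmers: int) -> float:
    return time_units * math.log2(n_programmers + 1)

for n in (1, 10, 100, 1000):
    print(f"{n:>4} programmers -> relative output {output(1.0, n):.1f}")
# Going from 1 to 1000 programmers multiplies output by roughly 10x, not 1000x.
```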
Except that if the people are working in independent smaller teams, each trying to crack the same problem, and *if* the solution requires a single breakthrough (or a few?) which can be made by a small team (e.g. public-key encryption, as opposed to landing a man on the moon), then presumably the chance of success is roughly proportional to the number of teams, because each has an independent probability of making the breakthrough. And it seems plausible that solving AI threats might be more like this.
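(A quick check of that intuition, with an assumed per-team success probability: the chance that at least one of n independent teams makes the breakthrough is 1 - (1 - p)^n, which is close to proportional to n while n*p is small and then saturates.)

```python
# Probability that at least one of n independent teams makes the breakthrough,
# assuming each team succeeds independently with probability p (p is made up).

def p_any_success(n_teams: int, p_per_team: float) -> float:
    return 1.0 - (1.0 - p_per_team) ** n_teams

p = 0.02  # assumed chance that a single small team cracks the problem
for n in (1, 5, 10, 50):
    print(f"{n:>2} teams -> P(breakthrough) = {p_any_success(n, p):.3f}")
# Roughly linear at first (0.020, 0.096, 0.183), then saturating (0.636 at 50).
```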