In software development, a perhaps relevant kind of problem solving, extra resources in the form of more programmers working on the same project don't speed things up much. My guesstimate is output = time × log(programmers). I assume the main reason is that there's a limit to how far you can divide a project into independent parallel programming tasks. (Cf. nine women can't make a baby in one month.)
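As a rough illustration of that guesstimate (the functional form and the log base are my own assumptions, nothing measured), here's how the scaling would play out:

```python
import math

def output(time, programmers):
    """Toy model: output = time * log(programmers).
    Purely illustrative; the units and the base-10 log are arbitrary."""
    return time * math.log(programmers, 10)

# Going from 10 to 100 programmers only roughly doubles output per unit time:
print(output(1, 10))   # ~1.0
print(output(1, 100))  # ~2.0
```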
Except that if the people are working in independent smaller teams, each trying to crack the same problem, and *if* the solution requires a single breakthrough (or a few?) that a smaller team can make (e.g. public-key encryption, as opposed to landing a man on the moon), then presumably progress is roughly proportional to the number of teams, because each has an independent probability of making the breakthrough. And it seems plausible that solving AI threats might be more like this.
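A minimal sketch of that intuition, assuming each team has the same (made-up) independent chance of success: the probability that at least one of N teams cracks it is 1 − (1 − p)^N, which grows roughly linearly in N while N·p stays small.

```python
def p_breakthrough(n_teams, p_per_team):
    """Chance that at least one of n independent teams succeeds,
    assuming identical, independent per-team probabilities
    (both numbers are invented for illustration)."""
    return 1 - (1 - p_per_team) ** n_teams

# With p = 0.05 per team, success scales roughly with the number of teams:
print(p_breakthrough(1, 0.05))   # 0.05
print(p_breakthrough(5, 0.05))   # ~0.23
print(p_breakthrough(10, 0.05))  # ~0.40
```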