Putting this short rant here for no particularly good reason, but I dislike it when people claim constraints here or there when, as far as I can tell, their intended meaning is only that “the derivative with respect to that input is higher than for the other inputs”.
On factory floors there exist hard constraints: throughput is limited by the slowest machine (when everything has to pass through it). The AI Safety world is obviously not like that. Increase funding and more work gets done; increase talent and more work gets done. None of these are hard constraints.
If I’m right that people are really only claiming the weak version, then I’d like to see somewhat more backing for their claims, especially if you say “definitely”. Since none of these are hard constraints, the derivatives could plausibly be really close to one another. In fact, they kind of have to be, because there are smart optimizers deciding where to spend their funding and actively trying to manage the proportion of money sent to field building (getting more talent) vs direct work.
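To make that equalized-marginals point concrete, here is a toy sketch in Python. The functional form, budget, and cost-per-talent figures are all made up for illustration; the only claim is that when a single allocator splits a fixed budget across channels with diminishing returns, the marginal progress per dollar routed through each channel comes out nearly equal at the optimum.

```python
# Toy model (made-up functional form): an allocator splits a budget B between
# direct work (funding F) and field building (buying talent T), with
# diminishing returns in both.  At the optimal split, the marginal progress
# per extra dollar is (near) equal across the two channels.
import numpy as np
from scipy.optimize import minimize_scalar

B = 100.0                # total budget (arbitrary units)
cost_per_talent = 2.0    # assumed dollars needed to recruit one unit of talent

def progress(F, T):
    # Cobb-Douglas purely for illustration: concave, diminishing returns
    return (F ** 0.6) * (T ** 0.4)

def progress_of_split(s):
    # s = share of the budget spent on direct work
    F = s * B
    T = (1.0 - s) * B / cost_per_talent
    return progress(F, T)

best = minimize_scalar(lambda s: -progress_of_split(s),
                       bounds=(0.01, 0.99), method="bounded")
s_star = best.x
F, T = s_star * B, (1.0 - s_star) * B / cost_per_talent

# Finite-difference marginal progress per extra dollar through each channel:
eps = 1e-6
marginal_direct = (progress(F + eps, T) - progress(F, T)) / eps
marginal_fieldbuilding = (progress(F, T + eps / cost_per_talent) - progress(F, T)) / eps
print(marginal_direct, marginal_fieldbuilding)   # nearly identical at the optimum
```

If the allocator is doing its job, the derivatives get equalized by construction, which is exactly why “X is the constraint” claims need more backing than a gut feeling about derivatives.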
There is not a difference between the two situations in the way you’re claiming, and indeed the differentiation point of view is used fruitfully both on factory floors and in more complex convex optimization problems. For example, see the connection between dual variables and how slack or taut each constraint is in convex optimization, and how the duals can be interpreted as relative tradeoff prices between the constrained resources.
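Concretely, the standard sensitivity result (stated from memory for the general convex case): writing the perturbed problem with optimal value $p^*(u)$,

$$p^*(u) = \min_x \; f_0(x) \quad \text{s.t.} \quad f_i(x) \le u_i, \qquad \lambda_i^* = -\frac{\partial p^*(u)}{\partial u_i},$$

whenever strong duality holds and $p^*$ is differentiable at $u$. Complementary slackness then gives $\lambda_i^* = 0$ for any constraint that is slack at the optimum, i.e. a zero price on non-binding resources and a non-zero tradeoff price on binding ones.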
In your factory floor example, the constraints would be the throughputs of the individual machines. Assuming you’re trying to maximize the throughput of the entire process, the dual variables would be zero everywhere except at the bottleneck machine, where the dual is the negative derivative of the overall throughput with respect to that machine’s throughput. So we could indeed identify the tight constraint as that machine’s throughput by looking for the derivative whose magnitude is significantly greater than all the others.
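Here is a minimal numerical sketch of that (the capacities are made-up numbers, and scipy’s HiGHS backend is just one convenient way to read off the duals):

```python
# Factory-floor LP: maximize the line's throughput t subject to t <= capacity_i
# for each machine.  linprog minimizes, so we minimize -t and read the duals
# off the inequality constraints.
import numpy as np
from scipy.optimize import linprog

capacities = np.array([12.0, 7.0, 15.0, 9.0])   # machine 2 (capacity 7) is the bottleneck
n = len(capacities)

c = np.array([-1.0])          # objective: minimize -t  (i.e. maximize t)
A_ub = np.ones((n, 1))        # row i says  t <= capacities[i]
b_ub = capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)], method="highs")

print("optimal throughput:", res.x[0])               # 7.0, the slowest machine
print("dual variables:    ", res.ineqlin.marginals)  # approx. [0, -1, 0, 0]
# The bottleneck's marginal is -1: relaxing that machine's capacity by one unit
# lowers the minimized objective (-t) by one, i.e. raises throughput by one.
# Every other constraint is slack and has a zero dual.
```

Note the sign convention: because linprog minimizes $-t$, the bottleneck’s dual shows up as $-1$, matching the “negative derivative” phrasing above.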
Practical problems often have a similarly sparse structure in their constraining inputs, but just because the dual variables aren’t exactly zero everywhere except one doesn’t mean the non-zero ones are secretly not actually constraining, or that it’s unprincipled to use the same math and intuitions to reason about both situations.
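And for the case where more than one constraint binds (again with made-up numbers), here is a two-product sketch where both resource constraints are tight at the optimum and carry different non-zero shadow prices; the same machinery reads the relative tradeoff prices straight off the duals:

```python
# Two products x, y sharing two resources; both resource constraints bind at
# the optimum, so both duals are non-zero (but different) shadow prices.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -4.0])            # maximize 3x + 4y  ->  minimize -(3x + 4y)
A_ub = np.array([[1.0, 1.0],          # resource A:  x +  y <= 4
                 [1.0, 3.0]])         # resource B:  x + 3y <= 6
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x)                   # [3. 1.]  -- both constraints are tight
print(res.ineqlin.marginals)   # [-2.5, -0.5] -- two different non-zero shadow prices
```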