Again trying to answer this one despite not feeling fully solid. I’m not sure about the second proposal and might come back to it, but here’s my response to the first proposal (force ontological compatibility):
The counterexample “Gradient descent is more efficient than science” should cover this proposal because it implies that the proposal is uncompetitive. Basically, the best Bayes net for making predictions could just turn out to be the super incomprehensible one found by unrestricted gradient descent, so if you force ontological compatibility then you could just end up with a less-good prediction model and get outcompeted by someone who didn’t do that. This might work in practice if the competitiveness hit is not that big and we coordinate around not doing the scarier thing (MIRI’s visible thoughts project is going for something like this), but ARC isn’t looking for a solution of that form.
I’m not sure why this isn’t a very general counterexample. Once we’ve decided that the human imitator is simpler and faster to compute, don’t all further approaches (e.g., penalizing inconsistency) involve a competitiveness hit along these general lines? Aren’t they basically designed to drag the AI away from a fast, simple human imitator toward a slow, complex reporter? If so, why is that better than dragging the AI from a foreign ontology toward a familiar ontology?
There is a distinction between the way the predictor reasons and the way the reporter works. Generally, we imagine that the predictor is trained the same way as the “unaligned benchmark” we’re trying to compare to, and the reporter is the thing we add on top to “align” it (perhaps by training another head on the model, perhaps by fine-tuning). Hopefully, the cost of training the reporter is small compared to the cost of the predictor (maybe around 10% or something).
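As a concrete picture of this setup (a minimal sketch, assuming a PyTorch-style model; the module names and sizes here are made up for illustration, not anything from the ELK report):

```python
import torch
import torch.nn as nn

class PredictorWithReporter(nn.Module):
    """A predictor trunk (trained like the unaligned benchmark) plus a
    separate reporter head added on top to answer questions about what
    the predictor believes."""
    def __init__(self, trunk: nn.Module, hidden_dim: int, num_answers: int):
        super().__init__()
        self.trunk = trunk  # the expensive predictor, trained as usual
        self.prediction_head = nn.Linear(hidden_dim, hidden_dim)
        self.reporter_head = nn.Linear(hidden_dim, num_answers)

    def forward(self, x):
        z = self.trunk(x)
        return self.prediction_head(z), self.reporter_head(z)

# Train only the reporter head; the predictor itself is left untouched,
# so its training procedure (and hence its competitiveness) is unchanged.
model = PredictorWithReporter(trunk=nn.Linear(64, 64), hidden_dim=64, num_answers=2)
for p in model.trunk.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.reporter_head.parameters(), lr=1e-3)
```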
In this frame, doing anything to change the way the predictor is trained results in a big competitiveness hit; e.g., forcing the predictor to use the same ontology as a human is potentially going to prevent it from using concepts that make its reasoning much more efficient. However, training the reporter in a different way, e.g. doubling the cost of training the reporter, only takes you from 10% of the predictor’s cost to 20%, which is not that bad of a competitiveness hit (assuming that the human imitator takes 10% of the cost of the original predictor to train).
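To put rough numbers on this (purely illustrative; the 10% figure is just the assumption above, not a measured cost):

```python
# Normalize the predictor's training cost to 1.
predictor_cost = 1.0
reporter_cost = 0.10 * predictor_cost  # assumed: reporter adds ~10%

baseline = predictor_cost + reporter_cost  # 1.10

# Doubling only the reporter's training cost:
doubled_reporter = predictor_cost + 2 * reporter_cost  # 1.20
print(doubled_reporter / baseline)   # ~1.09: about a 9% overall hit

# Versus a change that makes the predictor itself 2x as expensive to train:
changed_predictor = 2 * predictor_cost + reporter_cost  # 2.10
print(changed_predictor / baseline)  # ~1.91: nearly doubles the total cost
```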
In summary, competitiveness for ELK proposals primarily means that you can’t change the way the predictor was trained. We are already assuming/hoping the reporter is much cheaper to train than the predictor, so making the reporter harder to train results in a much smaller competitiveness hit.