Mmm, nice. Thanks! I like your distinction also; I think yours is sufficiently different that we shouldn’t see the two sets of distinctions as competing.* A system whose objective would make it capable on paper, but which isn’t capable in practice due to inner-alignment failures, would be objective-competitive but performance-uncompetitive. For this reason I think we shouldn’t equate objective and performance competitiveness.
If operating an AI system turns out to be an important part of the cost, then cost+date competitiveness would differ from training competitiveness, since cost competitiveness includes whatever the relevant costs are. However, I expect operating costs will matter much less for controlling the future than costs incurred during the creation of the system (all that training, data-gathering, infrastructure-building, etc.), so I think the mapping between cost+date competitiveness and training competitiveness basically works.
*Insofar as they are competing, I still prefer mine; as you say, it applies to more than just prosaic AI alignment proposals. It also makes it easier for us to talk about competitions as well, e.g. “In the FOOM scenario we need to win a date competition; cost-competitiveness still matters, but not as much.” Moreover, cost, performance, and date are fairly self-explanatory terms, whereas, as you point out, “objective” is more opaque. Finally, I think it’s worth distinguishing between cost and date competitiveness: in some scenarios one will be much more important than the other, and of course the two kinds of competitiveness vary independently across AI safety schemes. (Indeed, maybe they are mildly anti-correlated? Some schemes are fairly well-defined and codified already but would require tons of compute, whereas other schemes are more vague and thus would require tons of tweaking and cautious testing to get right, but don’t take that much compute.) That said, I do like how your version maps more onto the inner vs. outer alignment distinction.