I think that if our future goes well, it will be because we found ways to align AI well enough, and/or because we coordinated politically to slow or stop AI advancement long enough to accomplish the alignment part
Agree
not because researchers avoided measuring AI’s capabilities.
But differential technological development matters, as does making it clear that when you make a capability game like this, you are probably just contributing to capabilities, not doing alignment. I won’t say you should never do that, but I’ll say that’s what’s being done. I personally am all in on “we just need to solve alignment as fast as possible”. But I was a capabilities nerd for a while before I was an alignment nerd, and when I see someone doing something that looks like an accidental but potentially significant capabilities contribution, it seems worth pointing out that that’s what it is.
But differential technological development matters
Agreed.
as does making it clear that when you make a capability game like this, you are probably just contributing to capabilities
I would distinguish between measuring capabilities and improving capabilities. I agree that the former can motivate the latter, but they still seem importantly different. I continue to think that the alternative of not measuring capabilities (or only measuring some small subset that couldn’t be used as training benchmarks) just means we’re left in the dark about what these models can do, which seems pretty straightforwardly bad from a safety perspective.
not doing alignment
I agree that it’s definitely not doing alignment, and that working on alignment is the most important goal; I intend to shift toward directly working on alignment as I feel clearer about what work is a good bet (my current leading candidate, which I intend to focus on after this experiment: learning to better understand and shape LLMs’ self-models).
I very much appreciate the thoughtful critique, regardless of whether or not I’m convinced by it.