as does making it clear that when you make a capability game like this, you are probably just contributing to capabilities
I would distinguish between measuring capabilities and improving capabilities. I agree that the former can motivate the latter, but they still seem importantly different. I continue to think that the alternative of not measuring capabilities (or only measuring some small subset that couldn’t be used as training benchmarks) just means we’re left in the dark about what these models can do, which seems pretty straightforwardly bad from a safety perspective.
not doing alignment
I agree that it’s definitely not doing alignment, and that working on alignment is the most important goal; I intend to shift toward working directly on alignment as I become clearer about which work is a good bet (my current leading candidate, which I plan to focus on after this experiment, is learning to better understand and shape LLMs’ self-models).
I very much appreciate the thoughtful critique, regardless of whether or not I’m convinced by it.