Also, I think it’s much harder to pause or slow down capabilities than to accelerate safety, so more of the community’s focus should go to the latter.
And for now, it’s fortunate that inference scaling relies on chain-of-thought and similar intermediate outputs, which are differentially transparent compared to model internals; this probably makes it a safer way of eliciting capabilities.