Yeah, I think these are good points. However, I think that #1 is actually misleading. If we measure “work” in loss or in bits, then yes, we can probably identify the components that reduce loss the most. But a lot of very important cognition goes into getting the last 0.01 bits of loss in LLMs, and it can have big impacts on the capabilities of the model and the semantics of the outputs. I’m pessimistic about human-understanding-based approaches to auditing such low-loss, high-complexity capabilities.