Sure, but the post sets up a hypothetical, so it invites developing that hypothetical rather than denying it, however implausible it may be.
I think scaling up generation of data that's actually useful for more than robustness in language/multimodal models is the only remaining milestone before AGI. The idea is to learn from your own effortful multistep reasoning about naturally sourced data, not just from the data itself. Alignment of this generated data is what makes or breaks the future. The current experiments are much easier, because naturally sourced data is about as aligned as it gets and you just need to use it correctly, while generated data could systematically shift the targets of generalization.