The point of focusing on AI Alignment isn’t that it’s an efficient way to discover new technology, but that it makes it less likely humanity develops technology that destroys itself.
Is “proper alignment” not a feature of an AI system, i.e. something that has to be /invented/discovered/built/?
This sounds like semantics vis-à-vis the potential stance I was referring to above.
It is a feature of the AI system, but it’s very important to discover proper alignment before discovering AGI. If you go about making discoveries at random, it’s more likely that you discover AGI, and end humanity, before you discover proper alignment.