Even more so, I never understood why people believe that deliberately thinking about certain problems (e.g. AI Alignment) is more efficient than random exploration at solving them, given that there is no evidence of this being so (and no potential evidence, since the problems lie in the future).
The point of focusing on AI Alignment isn’t that it’s an efficient way to discover new technology but that it’s a way that makes it less likely that humanity will develop technology that destroys humanity.
A trade that makes us develop technology slower but increases the chances that humanity survives is worth it.
Is “proper alignment” not a feature of an AI system, i.e. something that has to be invented/discovered/built?
This sounds like semantics vis-a-vis the potential stance I was referring to above.
It is a feature of the AI system, but it’s very important to discover proper alignment before discovering AGI. If you go about making discoveries at random, it’s more likely that you end up discovering AGI and ending humanity before you discover proper alignment.